Job Details

JPC-232647 - Data Engineer
Experience:
5 - 8 years
Qualification:
Job Location:
Job Type:
Contract
Skills:
Apache Hadoop, Scala, Apache Spark, PySpark, Spark Streaming, Linux/Unix
Vacancies:
0
Job Posted: May 14, 2024

Job Description:

    Job Specification - Data Engineering

    Principal Responsibilities:

      • Software design, Scala & Spark development, and automated testing of new and existing components in an Agile, DevOps and dynamic environment (see the illustrative test sketch after this list)

      • Promoting development standards, code reviews, mentoring and knowledge sharing

      • Production support and troubleshooting

      • Implementing tools and processes, handling performance, scale, availability, accuracy and monitoring

      • Liaising with BAs to ensure that requirements are correctly interpreted and implemented

      • Participation in regular planning and status meetings; input to the development process through involvement in Sprint reviews and retrospectives; input into system architecture and design

      • Peer code reviews
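
    For illustration only: a minimal sketch of the kind of Scala & Spark development with automated testing described above, i.e. a small DataFrame transformation covered by a unit test against a local SparkSession. The component name (ActiveRecordFilter), the column names and the choice of ScalaTest's AnyFunSuite are assumptions made for the example, not requirements of the role.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical component under test: keeps only records flagged as active.
object ActiveRecordFilter {
  def apply(df: DataFrame): DataFrame = df.filter(col("active") === true)
}

// Minimal automated test running against a local SparkSession.
class ActiveRecordFilterSuite extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .master("local[1]")
    .appName("ActiveRecordFilterSuite")
    .getOrCreate()

  import spark.implicits._

  test("keeps only active records") {
    val input  = Seq((1, true), (2, false)).toDF("id", "active")
    val result = ActiveRecordFilter(input)
    assert(result.count() == 1L)
  }
}
```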



     

     

    Must have Requirements:



      • Scala development and design using Scala 2.10+ or Java development and design using Java 1.8+.

      • Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, PySpark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services.

      • Sound knowledge of working on the Unix/Linux platform.

      • Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL, PySpark (an illustrative sketch follows this list).

      • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA.

      • Understanding of big data modelling using both relational and non-relational techniques.

      • Experience debugging code issues and publishing the highlighted differences to the development team/architects.
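
    As an illustration of the pipeline experience listed above, here is a minimal sketch (assuming a Hive-enabled SparkSession) of a Scala/Spark batch job that reads a Hive table, applies DataFrame transformations and writes partitioned Parquet. The table, column and path names (raw_db.transactions, trade_ts, /data/curated/transactions) are placeholders invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

// Minimal batch pipeline sketch: read from a Hive table, transform with
// Spark DataFrame operations, and write the result back as partitioned Parquet.
object TransactionPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TransactionPipeline")
      .enableHiveSupport()
      .getOrCreate()

    // Read an assumed source table registered in the Hive metastore.
    val raw = spark.table("raw_db.transactions")

    // Basic cleansing and derivation of a partition column.
    val cleaned = raw
      .filter(col("amount") > 0)
      .withColumn("trade_date", to_date(col("trade_ts")))

    // Write the curated output partitioned by trade date.
    cleaned.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("/data/curated/transactions")

    spark.stop()
  }
}
```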



     

     

    Good to have Requirements:



      • Experience with time-series/analytics databases such as Elasticsearch.

      • Experience with scheduling tools such as Airflow and Control-M.

      • Understanding of or experience with Cloud design patterns.

      • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban.

      • Experience developing HiveQL and UDFs for analysing semi-structured/structured datasets (a sketch of a simple UDF follows this list).

      • Hands-on experience with data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Cloud Dataprep, Cloud Pub/Sub, Cloud Composer and Cloud Bigtable.
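
    As a small illustration of the UDF point above, the sketch below registers a custom function via Spark SQL's UDF registration (in the same spirit as a Hive UDF) and calls it from SQL over a toy in-memory dataset. The function name email_domain, the users view and the sample data are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession

// Sketch of a custom UDF for analysing semi-structured values: extracts the
// domain part of an email address so it can be used directly from SQL.
object EmailDomainUdfExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EmailDomainUdfExample")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // Register the function for use in SQL queries.
    spark.udf.register("email_domain", (email: String) => {
      if (email != null && email.contains("@")) email.substring(email.indexOf("@") + 1)
      else "unknown"
    })

    // Toy dataset exposed as a temporary view.
    Seq("alice@example.com", "bob@test.org", null)
      .toDF("email")
      .createOrReplaceTempView("users")

    spark.sql("SELECT email, email_domain(email) AS domain FROM users").show()

    spark.stop()
  }
}
```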



     

     

     


About Company:
Purview is a leading Digital Cloud & Data Engineering company headquartered in Edinburgh, United Kingdom, with a presence in 14 countries: India (Hyderabad, Bangalore, Chennai and Pune), Poland, Germany, Finland, Netherlands, Ireland, USA, UAE, Oman, Singapore, Hong Kong, Malaysia and Australia.

We have a strong presence in the UK, Europe and APAC, providing services to captive clients (HSBC, NatWest, Northern Trust, IDFC First Bank, Nordea Bank, etc.) in fully managed solutions and co-managed capacity models. We also support various top tier-1 IT organisations (Capgemini, Deloitte, Wipro, Virtusa, L&T, Coforge, TechM and more) in delivering solutions and workforce/resources.

Company Info:
IN:
3rd Floor, Sonthalia Mind Space
Near Westin Hotel, Gafoor Nagar
Hitechcity, Hyderabad
Phone: +91 40 48549120 / +91 8790177967

UK:
Gyleview House, 3 Redheughs Rigg,
South Gyle, Edinburgh, EH12 9DQ.
Phone: +44 7590230910
Email: careers@purviewservices.com