Job Details

JPC-2245 - Hadoop Data Engineer
Experience:
5 - 8 years
Qualification:
Job Location:
London and Bristol
Job Type:
Full Time
Skills:
Spark (mandatory), Java/Scala, Hive, Kafka, Sqoop, Flume and Oozie
Vacancies:
5
Job Posted: May 15, 2024

Job Description:

    Mandatory Skills: Spark, Java/Scala, Hive, Kafka, Sqoop, Flume and Oozie

    Secondary Skills: Jenkins, UrbanCode

    Software Engineer (Big Data)

    Role Responsibilities

    ·        Build solutions that ingest data from source systems into our big data platform, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes (see the sketch after this list)

    ·        Create high-quality code that can effectively process large volumes of data at scale

    ·        Put efficiency and innovation at the heart of the design process to create design blueprints (patterns) that can be re-used by other teams delivering similar types of work

    ·        Use modern engineering techniques such as DevOps, automation and Agile to deliver big data applications efficiently

    ·        Produce code that is in line with team, industry and group best practice, using a wide array of engineering tools such as GHE (GitHub Enterprise), Jenkins, UrbanCode, Cucumber, Xray, etc.

    ·        Work as part of an Agile team, taking part in relevant ceremonies and always helping to drive a culture of continuous improvement

    ·        Work across the full software delivery lifecycle, from requirements gathering/definition through to design, estimation, development, testing and deployment, ensuring solutions are of a high quality and non-functional requirements are fully considered

    ·        Consider platform resource requirements throughout the development lifecycle, with a view to minimising resource consumption

    ·        Once Cloud is proven within the bank, help to successfully transition on-prem applications and working practices to GCP
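
    For illustration only, a minimal Spark (Scala) sketch of the ingest-transform-publish pattern described in the first responsibility above; the paths, table and column names are hypothetical placeholders, not an actual Purview or client pipeline.

        // Minimal sketch: ingest raw landed data, curate it, publish to Hive.
        // All paths, table names and column names below are assumptions.
        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.functions._

        object IngestJob {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("source-system-ingest")
              .enableHiveSupport()
              .getOrCreate()

            // Ingest: read raw extracts landed on HDFS (e.g. by Sqoop or Flume).
            val raw = spark.read.parquet("/data/landing/source_system/")

            // Transform/curate: stamp the load date and drop records failing a basic check.
            val curated = raw
              .withColumn("ingest_date", current_date())
              .filter(col("record_id").isNotNull)

            // Publish: write to a Hive table for downstream operational/analytical use.
            curated.write.mode("overwrite").saveAsTable("curated.source_system_events")

            spark.stop()
          }
        }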

    About You

    ·        Prior experience of technical development on big data systems (large-scale Hadoop, Spark, Beam, Flume or similar data processing paradigms), and associated data transformation and ETL experience

    ·        Coding experience in Java or Scala

    ·        Hive, Pig and Sqoop, plus knowledge of data transfer technologies such as Kafka, Attunity and CDC, are a bonus

    ·        GCP or other cloud expertise is a plus

    ·        Passionate about data and technology

    ·        Excellent people and communication skills, able to communicate with technical and non-technical colleagues alike

    ·        Good team player with a strong team ethos

About Company:
Purview is a leading Digital Cloud & Data Engineering company headquartered in Edinburgh, United Kingdom, with a presence in 14 countries: India (Hyderabad, Bangalore, Chennai and Pune), Poland, Germany, Finland, the Netherlands, Ireland, the USA, the UAE, Oman, Singapore, Hong Kong, Malaysia and Australia.

We have a strong presence in the UK, Europe and APAC, providing services to captive clients (HSBC, NatWest, Northern Trust, IDFC First Bank, Nordea Bank, etc.) through fully managed solutions and co-managed capacity models. We also support various top tier-1 IT organisations (Capgemini, Deloitte, Wipro, Virtusa, L&T, Coforge, TechM and more) in delivering solutions and workforce/resources.

Company Info:
IN:
3rd Floor, Sonthalia Mind Space
Near Westin Hotel, Gafoor Nagar
HITEC City, Hyderabad
Phone: +91 40 48549120 / +91 8790177967

UK:
Gyleview House, 3 Redheughs Rigg,
South Gyle, Edinburgh, EH12 9DQ.
Phone: +44 7590230910
Email: careers@purviewservices.com