Big Data Engineer (with Hadoop)

Adroit People Limited

A leading technical talent search firm providing staffing services in the UK, Europe, USA, Asia-Pacific, and the Middle East.

About the Company

Adroit People Ltd is a leading technology firm specializing in staffing, IT consulting, implementation, and support services. Serving clients across the UK, Europe, USA, Asia-Pacific, and the Middle East, Adroit People Ltd is dedicated to delivering innovative solutions that address the evolving needs of businesses in today’s competitive market. With a team of experienced consultants, the company provides value-driven solutions, ensuring customer satisfaction and building long-lasting relationships.

About the Role

The Big Data Engineer will play a pivotal role in designing, optimizing, and maintaining data pipelines using Hadoop and Spark. This role involves working with cutting-edge big data technologies to process large datasets and ensure smooth data operations. The successful candidate will work closely with cross-functional teams to implement ETL processes, maintain data quality, and contribute to modernization efforts, including cloud and Databricks integration.

Responsibilities

  • Design, build, and optimize ETL pipelines using Hadoop ecosystem tools such as HDFS, Hive, and Spark (a minimal sketch follows this list).
  • Collaborate with development and data teams to ensure efficient and reliable data processing.
  • Perform data modeling, quality checks, and system performance tuning to ensure high data quality.
  • Contribute to the modernization of existing infrastructure, focusing on potential integration with cloud services and Databricks.
  • Monitor and troubleshoot data workflows to ensure minimal downtime and maximum performance.
  • Maintain and update data systems to meet changing business needs and technological advancements.
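
To give a concrete flavor of the first responsibility above, here is a minimal PySpark sketch of a daily ETL step: read raw files from HDFS, clean and type the data, and publish it as a partitioned Hive table. The paths, table name, and columns are hypothetical placeholders, not details of the actual role.

    # Minimal PySpark ETL sketch; all paths, names, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders-daily-etl")   # hypothetical job name
        .enableHiveSupport()           # allow reading/writing Hive tables
        .getOrCreate()
    )

    # Extract: raw CSV landing zone on HDFS (hypothetical path)
    raw = spark.read.option("header", True).csv("hdfs:///data/landing/orders/")

    # Transform: type the columns, drop malformed and duplicate rows,
    # and derive a date column to partition by
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropna(subset=["order_id", "order_ts"])
           .dropDuplicates(["order_id"])
           .withColumn("ds", F.to_date("order_ts"))
    )

    # Load: publish as a day-partitioned, Parquet-backed Hive table
    (clean.write.mode("overwrite")
          .partitionBy("ds")
          .format("parquet")
          .saveAsTable("analytics.orders_clean"))  # hypothetical table

    spark.stop()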

Required Skills

  • Strong experience with Hadoop ecosystem and workflow tools: HDFS, Hive, Spark, Impala, Oozie, and Airflow.
  • Proficiency in Java, Scala, or Python for developing data processing frameworks.
  • Experience with ETL pipeline development and optimization.
  • Solid understanding of cloud platforms and services, particularly AWS (S3, Athena, EMR, Redshift, Glue, Lambda) and Databricks.
  • Ability to perform data modeling, quality checks, and system performance tuning (see the sketch after this list).
  • Experience with big data processing and cloud-based data solutions.
  • Familiarity with data and AI platforms such as Databricks.
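
As a small illustration of the quality checks mentioned above, the sketch below counts rule violations on a Spark table and fails the run if any rule is breached, so a scheduler such as Oozie or Airflow can alert on it. The table name, columns, and rules are hypothetical placeholders.

    # Minimal data-quality check sketch; table, columns, and rules are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders-quality-check")
        .enableHiveSupport()
        .getOrCreate()
    )

    df = spark.table("analytics.orders_clean")  # hypothetical table from the ETL step
    total = df.count()

    # Each rule maps a name to the number of offending rows
    checks = {
        "null_order_id": df.filter(F.col("order_id").isNull()).count(),
        "negative_amount": df.filter(F.col("amount") < 0).count(),
        "duplicate_order_id": total - df.dropDuplicates(["order_id"]).count(),
    }

    failed = {name: n for name, n in checks.items() if n > 0}
    if failed:
        # Raising here fails the workflow step so the scheduler can alert
        raise ValueError(f"data quality checks failed: {failed}")
    print(f"all checks passed on {total} rows")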

Preferred Qualifications

  • Experience in integrating cloud services (AWS, GCP, or Azure) with big data platforms.
  • Knowledge of data lake architecture and managing large-scale datasets.
  • Prior experience working in Agile environments and collaborating with cross-functional teams.
  • Familiarity with containerization technologies such as Docker.
  • Experience with SQL and NoSQL databases, including relational and distributed systems.

For additional information and the full job description, visit our official website.
