Senior Hadoop Engineer

EXL

We make sense of data to drive your business forward. #MakeSenseofData #DriveYourBusinessForward #PartnerYourWay

About the Company

EXL is a leading provider of digital transformation services, specializing in data science, operations management, and business evolution. With a strong focus on collaboration and tailoring solutions to meet unique business needs, EXL partners with clients to optimize the use of data, enhance decision-making, and improve overall business efficiency. Through a combination of AI, analytics, and digital interventions, EXL drives competitive advantage and enables clients to outperform the competition.

About the Role

EXL is seeking an experienced Senior Hadoop Engineer to work on large-scale data solutions and multi-tier software development. This hybrid role, located in Foster City, CA, or Houston, TX, requires expertise in Big Data architecture and the Google Cloud Platform (GCP). As a Senior Hadoop Engineer, you will contribute to the design, development, and maintenance of scalable data pipelines while collaborating closely with engineers, operations, and business stakeholders to deliver high-quality solutions. This role offers the opportunity to lead end-to-end development processes, from architecture to deployment.

Responsibilities

  • Design, develop, and maintain multi-tier software and large-scale data solutions.
  • Own the full software lifecycle: architecture, coding, testing, and deployment.
  • Collaborate with cross-functional teams, including engineers, operations, and business stakeholders, to deliver scalable, efficient solutions.
  • Ensure code quality by writing well-tested (including unit tests in Golang), reusable, and maintainable code.
  • Deliver features and sub-systems on time and according to specifications.
  • Drive innovative solutions through excellent problem-solving and system design thinking.

Required Skills

  • 10+ years of hands-on software development experience.
  • Strong expertise in algorithms, data structures, object-oriented programming (OOP), and design patterns.
  • Proficiency in multi-threaded programming, troubleshooting, and debugging.
  • Extensive experience with Google Cloud Platform (GCP) development.
  • Strong knowledge of SQL (BigQuery, Hive, Spark SQL), Spark job debugging, and performance tuning.
  • Proficient in Python and PySpark.
  • Experience in Big Data architecture, including Hadoop, Hive, SQL, and Airflow.
  • Expertise in designing end-to-end data pipelines (illustrated in the sketch after this list).
  • Familiarity with Unix systems and GitHub for version control.
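
For context, the sketch below shows the kind of PySpark and Spark SQL pipeline step this role involves: reading a Hive table, aggregating with Spark SQL, and writing partitioned Parquet output for a downstream Airflow task. It is a minimal illustration only; the table name, bucket path, and tuning values are hypothetical and not part of this job description.

# Minimal, illustrative sketch only -- table names, paths, and settings
# below are hypothetical and not taken from the job description.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily_orders_rollup")                 # hypothetical job name
    .config("spark.sql.shuffle.partitions", "200")  # example tuning knob
    .getOrCreate()
)

# Read raw events from a (hypothetical) Hive table.
orders = spark.table("raw.orders")
orders.createOrReplaceTempView("orders")

# Spark SQL aggregation: daily revenue per region.
daily_revenue = spark.sql("""
    SELECT order_date, region, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date, region
""")

# Write partitioned output for downstream consumers (e.g., an Airflow task).
(
    daily_revenue
    .repartition("order_date")    # limit small files per output partition
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("gs://example-bucket/marts/daily_revenue")  # hypothetical GCS path
)

spark.stop()

In practice, the same job would typically be scheduled and monitored from Airflow and tuned against the cluster's actual data volumes; the snippet is only meant to ground the skills listed above.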

Preferred Qualifications

  • Experience in client and stakeholder management.
  • Leadership experience with offshore teams and delivery management.
  • Relevant certifications in big data, GCP, or related technologies.
  • Experience in building high-performance, scalable data architectures.

The complete job description is available on the official website.