Hadoop Big Data Developer

Synechron

Synechron is a leading digital transformation consulting firm focused on the financial services & big tech industries.

About the Company

Synechron is a leading global consulting firm focused on providing innovative digital solutions, with a strong presence in industries such as financial services and technology. Synechron's AI-first approach helps clients navigate the evolving digital landscape. With a workforce of 14,500+ professionals across 30+ countries, Synechron has earned recognition for delivering cutting-edge services such as cloud solutions, DevOps, AI/ML, and data engineering.

About the Role

The Big Data Developer at Synechron will design, develop, and optimize scalable data processing solutions using Hadoop ecosystem components. This role focuses on creating high-performance data pipelines for real-time and batch data processing, ensuring system reliability and security. The ideal candidate will work with cross-functional teams, implementing data solutions that meet business needs while maintaining best practices in data security, performance, and governance.

Responsibilities

  • Design and develop data processing pipelines using Hadoop ecosystem components (HDFS, Hive, Impala, Phoenix).
  • Build and optimize SQL-based queries, leveraging tools such as Impala and Phoenix for data management.
  • Develop dashboards, reports, and data access interfaces using Hue.
  • Implement caching solutions (e.g., Redis, Couchbase) to enhance system performance (see the illustrative sketch after this list).
  • Collaborate with teams to understand data requirements and design scalable solutions.
  • Tune and optimize data pipelines for performance, ensuring efficient data processing.
  • Monitor, troubleshoot, and resolve issues with data pipelines and storage systems.
  • Stay updated with industry trends and best practices in big data technologies.
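
As a rough illustration of the pipeline and caching responsibilities above, here is a minimal sketch (not taken from Synechron's systems) of serving a Hive aggregation through a Redis cache. It assumes the PyHive and redis-py client libraries are available; the hostnames, database, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: cache-aside pattern for a Hive aggregation, using PyHive and redis-py.
import json

import redis
from pyhive import hive  # PyHive client for HiveServer2

# Hypothetical connection details, for illustration only.
HIVE_HOST = "warehouse-host"
REDIS_HOST = "cache-host"
CACHE_TTL_SECONDS = 300

cache = redis.Redis(host=REDIS_HOST, port=6379, decode_responses=True)

def daily_trade_volume(trade_date: str) -> list:
    """Return per-symbol trade volume for a date, serving from Redis when possible."""
    cache_key = f"trade_volume:{trade_date}"
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    # Cache miss: fall back to Hive. The database and table names are illustrative.
    conn = hive.connect(host=HIVE_HOST, port=10000, database="markets")
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT symbol, SUM(quantity) AS volume "
            "FROM trades WHERE trade_date = %(trade_date)s GROUP BY symbol",
            {"trade_date": trade_date},
        )
        rows = [{"symbol": s, "volume": v} for s, v in cursor.fetchall()]
    finally:
        conn.close()

    # Store the result with a short TTL so repeated dashboard queries skip Hive.
    cache.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(rows))
    return rows
```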

Required Skills

  • 8+ years of experience in big data development, specifically with Hadoop and related components.
  • Proficiency in Hive, Impala, Phoenix, and Hue.
  • Experience with caching technologies such as Redis and Couchbase.
  • Strong command of SQL and programming or scripting languages such as Python, Java, or Shell.
  • Expertise in data modeling, ETL processes, and performance tuning.
  • Knowledge of data security, access controls, and data governance.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Ability to troubleshoot complex data issues and work collaboratively.

Preferred Qualifications

  • Certifications in Hadoop, Big Data, or related technologies.
  • Experience with large-scale data warehousing solutions.
  • Familiarity with tools like Spark, Kafka, Oozie, and Flink (see the Spark sketch after this list).
  • Understanding of caching strategies and in-memory data grids.
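
For the Spark familiarity mentioned above, the following is a minimal PySpark sketch of a batch rollup job. It assumes a Spark deployment configured with Hive support; the job name, database, table, and column names are placeholders, not details from this posting.

```python
# Minimal sketch: PySpark batch job that aggregates a Hive table and writes the result back.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-volume-rollup")  # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Aggregate trade quantity per date and symbol; names are illustrative.
daily_volume = (
    spark.table("markets.trades")
    .groupBy("trade_date", "symbol")
    .agg(F.sum("quantity").alias("volume"))
)

# Persist the rollup as a managed table, overwriting any previous run.
daily_volume.write.mode("overwrite").saveAsTable("markets.daily_trade_volume")

spark.stop()
```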
