Maveric Can Accelerate Your Next
About the Company
Founded in 2000, Maveric Systems is a niche, domain-led BankTech specialist. The company partners with global banks to address their business challenges through emerging technologies. With over 2,750 technology specialists, Maveric helps customers navigate a rapidly changing environment and achieve their goals. Specializing in Data, Digital, Core Banking, and Quality Engineering, Maveric operates across three continents, with delivery capabilities in India, the Netherlands, Poland, Singapore, the UAE, the UK, and the US.
About the Role
As a Hadoop Developer at Maveric Systems, you will play a key role in designing, developing, and optimizing scalable data pipelines. Your work will focus on building robust data ingestion, transformation, and storage solutions for large-scale datasets, ensuring high performance and reliability across distributed systems.
Responsibilities
- Design and optimize scalable data pipelines using Apache Spark, Python, and Hadoop.
- Implement data ingestion, transformation, and storage solutions for large datasets.
- Collaborate with cross-functional teams to understand and translate business requirements into technical solutions.
- Deploy and manage Big Data tools and frameworks, including Kafka, Hive, HBase, and Flink.
- Ensure the quality, integrity, and availability of data across distributed systems.
- Conduct performance tuning and benchmarking of Big Data applications.
- Implement data governance practices, including metadata management and data lineage tracking.
- Stay up to date with emerging technologies and integrate them into the data ecosystem as needed.
Required Skills
- 6+ years of software development experience with a focus on Big Data technologies.
- Strong proficiency in Python programming.
- Hands-on experience with Hadoop, Spark, Kafka, and NoSQL databases.
- Proven experience in building and maintaining ETL/ELT pipelines.
- Familiarity with cloud platforms (AWS, GCP, or Azure) is a plus.
- Strong problem-solving and communication skills.
Preferred Skills
- Experience migrating ETL frameworks from proprietary tools (e.g., Ab Initio) to open-source platforms like Spark.
- Knowledge of machine learning and data analytics tools.
- Financial services or core banking systems experience is a strong plus.