Full Stack Engineer (with Hadoop)

Antal Poland

Recruitment excellence. Every day we create a perfect match between Employees and Employers across CEE.

About the Company

Antal is a leading recruitment firm with over two decades of experience, specializing in the recruitment of professionals and managers. With a strong presence across 35 countries, including Poland, the Czech Republic, and Slovakia, Antal excels in providing flexible HR solutions, including consulting, employer branding, and market research.

About the Role

The Full Stack Engineer – Hadoop will join a dynamic Data Platform team to work on large-scale global data solutions. The role focuses on modernizing and automating a hybrid platform that spans on-premises and multi-cloud environments, improving data accessibility and enabling innovation. You will contribute to platform resilience, automation tool development, and enhancing developer experience for data engineering teams.

Key Responsibilities

  • Automation Development: Design and build automation tools that integrate with a complex platform ecosystem.
  • Hadoop Big Data Platform Support: Provide technical support for Hadoop platforms (Cloudera preferred) and manage user access/security protocols (Kerberos, Ranger, Knox, TLS).
  • CI/CD Pipeline Management: Implement and maintain CI/CD pipelines using Jenkins and Ansible for seamless software deployment.
  • System Performance and Monitoring: Perform capacity planning, system monitoring, and optimize performance to ensure operational efficiency.
  • Collaborative Development: Work closely with architects and developers to design scalable, resilient solutions across the organization.
  • Process Improvement: Analyze and streamline existing processes to reduce complexity and automate manual tasks.
  • Tooling Development: Improve engineering tooling for platform management to enhance operational support.

Required Skills & Experience

  • Minimum of 5 years of experience in engineering Big Data environments (on-premises or cloud).
  • Strong expertise in the Hadoop ecosystem (Hive, Spark, HDFS, Kafka, YARN, Zookeeper).
  • Hands-on experience with Cloudera setup, upgrades, and performance tuning.
  • Proficient in scripting (Shell, Linux utilities) and Hadoop system management.
  • Knowledge of security protocols (Apache Ranger, Kerberos, Knox, TLS, encryption).
  • Experience in large-scale data processing and optimizing Apache Spark jobs.
  • Familiarity with CI/CD tools like Jenkins and Ansible for infrastructure automation.

Preferred Qualifications

  • Experience with real-time data processing and integration.
  • Familiarity with cloud-based Big Data platforms (AWS, Azure, GCP).
  • Background in optimizing and automating complex data pipelines.
