Software Engineer – Hadoop Big Data Platform

  • Full Time
  • Kraków
Vertex Agility

We provide agile, on-demand tech teams.

We’re seeking an experienced Software Engineer to support the development and automation of tooling for our on-premises, Hadoop-based big data platform. You’ll work on backend/frontend development, CI/CD automation, and Hadoop cluster operations. This hybrid role (6 office days per month) is based in Kraków and requires strong English skills (B2 or higher).

Key Responsibilities:

  • Develop and enhance features within the Hadoop ecosystem (Cloudera).

  • Automate platform onboarding, access, and maintenance (Linux, HDFS, Kerberos, Spark).

  • Build internal APIs, UIs, and containerized services.

  • Optimize Spark jobs and manage CI/CD workflows.

  • Collaborate with engineering and architecture teams on platform scalability and resilience.

Requirements:

  • 5+ years of big data experience, particularly with the Cloudera stack.

  • Deep understanding of Hadoop ecosystem tools: Hive, Spark, Kafka, YARN, HDFS, ZooKeeper.

  • Skilled in Linux, shell scripting, Apache Ranger/Knox, and Kerberos.

  • Familiarity with Jenkins, Ansible, Agile methods, and real-time data tools.

  • Experience with S3-compatible storage (e.g., VAST) is a plus.

What’s Offered:

  • B2B contract with long-term, stable engagement.

  • Private healthcare, life insurance, and Multisport access.
