Senior Hadoop Administrator

InfoCepts

Improving business results through more effective use of data, AI & user-friendly analytics

About the Company

A leading company focused on developing and scaling next-gen Big Data platforms, committed to delivering cutting-edge solutions for data-driven applications. With a strong focus on AWS, the team drives efficiency and innovation by maintaining and scaling big data infrastructure. The mission is to provide robust, reliable platforms that power data insights and operational excellence across teams.

About the Role

The Hadoop Administrator will play a crucial role in designing, building, and maintaining Big Data infrastructure across both on-prem and AWS platforms. This role is integral in ensuring the performance, reliability, and security of the Big Data platform while enabling a seamless experience for developers and analysts. Additionally, the role will involve automation, troubleshooting, and ensuring high availability and security of the system.

Responsibilities

  • Design, build, and scale Big Data infrastructure in both AWS and on-prem environments.
  • Ensure the end-to-end availability of Big Data platforms.
  • Improve the efficiency, reliability, and security of Big Data infrastructure.
  • Automate platform setup and day-to-day operational tasks.
  • Create custom tools for system automation and maintenance.
  • Set standards for the production environment and ensure smooth operation.
  • Participate in a 24×7 on-call rotation, responding to alerts and investigating platform issues.
  • Work with AWS services such as CloudWatch, CloudTrail, and Lambda to proactively detect and resolve issues.
  • Collaborate with data engineers and developers to scale their efforts using Kubernetes, Docker, and other modern technologies.
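The automation responsibilities above often start with small scripts around standard Hadoop tooling. As a minimal, hypothetical sketch (the alert threshold and the exact report format are assumptions for illustration), a day-to-day capacity check might parse the plain-text output of `hdfs dfsadmin -report`:

```python
import re

def dfs_used_percent(report_text: str) -> float:
    """Return the cluster-wide DFS usage percentage from a dfsadmin-style report."""
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report_text)
    if match is None:
        raise ValueError("DFS Used% not found in report")
    return float(match.group(1))

def needs_attention(report_text: str, threshold: float = 80.0) -> bool:
    """True when DFS usage crosses the (assumed) alerting threshold."""
    return dfs_used_percent(report_text) >= threshold

if __name__ == "__main__":
    # Sample report fragment; in practice this would come from
    # `subprocess.run(["hdfs", "dfsadmin", "-report"], ...)`.
    sample = "Configured Capacity: 1099511627776 (1 TB)\nDFS Used%: 84.5%\n"
    print(needs_attention(sample))  # prints True for this sample
```

A check like this would typically feed a pager or a CloudWatch custom metric rather than printing directly.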

Required Skills

  • Strong experience with the Hadoop ecosystem (HDFS, YARN, Hive, Spark, Oozie, Presto, Ranger).
  • Expertise in Amazon EMR and AWS services.
  • In-depth knowledge of Hadoop security, including SSL/TLS, Kerberos, and role-based authorization.
  • Performance tuning experience with Hadoop clusters and MapReduce/Spark jobs.
  • Experience with infrastructure automation tools like Terraform, CI/CD pipelines (Git, Jenkins), and Ansible.
  • Proficiency in Bash, Python, or Java for automation and scripting.
  • Understanding of JRE/JVM and GC tuning.
  • Hands-on experience with RDBMS (e.g., Oracle, MySQL) and SQL.
  • Experience with Docker and Kubernetes for scalability in data engineering projects.
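The performance-tuning skills listed above usually begin with simple memory arithmetic for YARN containers and executor JVM heaps. A hedged sketch: the function names are illustrative, and the 80% heap-to-container ratio is a common rule of thumb rather than a fixed standard.

```python
def container_memory_mb(node_memory_mb: int, reserved_mb: int, containers: int) -> int:
    """Memory available to each YARN container on a node,
    after reserving memory for the OS and Hadoop daemons."""
    return (node_memory_mb - reserved_mb) // containers

def executor_heap_mb(container_mb: int, heap_fraction: float = 0.8) -> int:
    """JVM heap (-Xmx) to request inside a container, leaving
    headroom for off-heap and native allocations."""
    return int(container_mb * heap_fraction)

if __name__ == "__main__":
    # Example: a 128 GB node, 8 GB reserved, 10 containers per node.
    container = container_memory_mb(node_memory_mb=128 * 1024,
                                    reserved_mb=8 * 1024,
                                    containers=10)
    print(container, executor_heap_mb(container))  # prints 12288 9830
```

Real tuning also weighs GC behavior and per-job skew, but arithmetic like this is the usual starting point for sizing `yarn.nodemanager.resource.memory-mb` and Spark executor settings.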

Preferred Qualifications

  • Hands-on experience with Snowflake, Qubole, and Airflow.
  • Experience using AWS services such as CloudWatch and Lambda for proactive system management.
  • Background in data engineering with strong focus on high availability, performance, and security.

Why Join?

  • Work on cutting-edge technologies in a fast-paced, innovative environment.
  • Play a key role in shaping the future of Big Data infrastructure.
  • Collaborate with top-tier professionals in an agile and diverse team.
  • Be part of a growing company offering career development opportunities and a supportive, flexible work culture.


Copyright © 2025 hadoop-jobs. All Rights Reserved.