Data Platform Architect (Hadoop and Kafka)

FintechOS

Next-generation financial and insurance product management.

Role Overview:

  • Develop and oversee the strategy for the data platform architecture, covering data ingestion, storage, processing, and analytics.

  • Design scalable, high-performance data platforms that align with business goals and industry standards.

  • Create and maintain reliable data ingestion pipelines that collect data from diverse sources such as Kafka topics, databases, and external APIs (a streaming ingestion sketch follows this list).

  • Integrate external systems, databases, and APIs to consolidate data within the platform.

  • Architect the platform’s storage solutions, including data warehouses, lakes, and marts.

  • Define schemas, partitioning schemes, and data organization approaches that support efficient retrieval and analysis (both sketches after this list write date-partitioned output).

  • Build data processing and transformation pipelines using technologies such as Apache Spark, Hadoop, or managed cloud services (see the batch transformation sketch after this list).

  • Implement and enforce data governance policies to ensure quality, lineage, and privacy of data.

  • Apply security measures, including access control and encryption, to safeguard sensitive information.

  • Design and deploy analytics and reporting tools to enable self-service, visualization, and ad-hoc queries for users.

  • Utilize cloud and big data technologies (AWS, Azure, GCP, Hadoop, Spark, Kafka) to create scalable, cost-efficient platforms.

  • Identify performance bottlenecks and optimize the platform for better resource use and efficiency.

  • Monitor system performance, plan capacity, and carry out tuning activities.

  • Collaborate with engineering teams and business stakeholders to understand needs and guide data platform decisions.

  • Lead and mentor engineers, offering technical support and direction.

  • Stay current with emerging trends and innovations in data management and big data technologies.

  • Assess new tools and technologies and recommend their adoption to improve the platform.
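
To ground the ingestion and partitioning bullets above, here is a minimal sketch of the kind of pipeline this role owns: a PySpark Structured Streaming job that consumes a Kafka topic and lands date-partitioned Parquet in the raw zone of a data lake. The `transactions` topic, the payload schema, the broker address, and the `s3a://` paths are all illustrative assumptions, not details from the posting.

```python
# Minimal sketch (assumptions throughout): PySpark Structured Streaming
# job that ingests a Kafka topic and writes date-partitioned Parquet to
# the raw zone of a data lake. Requires the spark-sql-kafka connector
# on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_date
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingestion-sketch").getOrCreate()

# Assumed payload schema for a hypothetical "transactions" topic.
payload_schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker
    .option("subscribe", "transactions")                # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the message value as bytes: parse the JSON payload into
# columns and derive a date column that the lake layout partitions by.
events = (
    raw.select(from_json(col("value").cast("string"), payload_schema).alias("e"))
    .select("e.*")
    .withColumn("ingest_date", to_date(col("event_time")))
)

# Land the stream as date-partitioned Parquet; the checkpoint location
# lets the file sink recover exactly-once output across restarts.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://data-lake/raw/transactions/")            # assumed path
    .option("checkpointLocation", "s3a://data-lake/_chk/transactions/")
    .partitionBy("ingest_date")
    .start()
)
query.awaitTermination()
```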
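
A downstream counterpart, likewise a sketch under assumed names and paths: a PySpark batch job that reads the raw zone written above, applies basic quality rules, and publishes a date-partitioned aggregate for the analytics layer.

```python
# Minimal sketch of a batch transformation over the raw zone produced by
# the streaming job above. All paths and column names are illustrative
# assumptions, not details from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-transform-sketch").getOrCreate()

raw = spark.read.parquet("s3a://data-lake/raw/transactions/")  # assumed path

# Basic quality gate: drop malformed rows, normalize types, deduplicate.
clean = (
    raw.dropna(subset=["transaction_id", "account_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["transaction_id"])
)

# Daily per-account aggregate for a curated/mart layer.
daily = (
    clean.groupBy("account_id", "ingest_date")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Keep the curated table partitioned by date as well.
(
    daily.write.mode("overwrite")
    .partitionBy("ingest_date")
    .parquet("s3a://data-lake/curated/daily_account_totals/")  # assumed path
)
```

Partitioning both zones by date lets query engines prune files by partition at read time, which is what the schema-and-partitioning bullet above is aiming for.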

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.

  • Proven experience designing and building scalable, high-performance data platforms.

  • Expertise with big data tools, cloud platforms, and processing frameworks (e.g., AWS, Azure, GCP, Hadoop, Spark, Kafka).

  • Strong knowledge of data architecture, modeling, integration, and governance.

  • Experience with data lakes, warehouses, and NoSQL databases.

  • Skilled in data processing and transformation using Spark, Hadoop, or cloud services.

  • Understanding of data security, privacy, and compliance standards.

  • Strong leadership and communication skills to work effectively across teams.

  • Excellent problem-solving abilities to tackle complex data challenges.

  • Experience with data analytics and visualization tools.
