FintechOS
Next-generation financial and insurance product management.
About the Company
FintechOS focuses on delivering advanced data platform solutions by leveraging modern big data and cloud technologies to support business analytics and data management needs.
About the Role
The position involves developing and managing the strategy for a scalable data platform architecture. The role covers data ingestion, storage, processing, and analytics, with an emphasis on aligning technical solutions with business objectives and industry best practices.
Responsibilities
- Design scalable and high-performance data platforms tailored to business goals.
- Develop and maintain reliable data ingestion pipelines from multiple sources.
- Integrate external systems, databases, and APIs to consolidate data within the platform.
- Architect storage solutions, including data warehouses, lakes, and marts.
- Define data schemas, partitioning, and organizational methods for efficient access and analysis.
- Build data processing and transformation pipelines using Apache Spark, Hadoop, or cloud services.
- Implement and enforce data governance policies ensuring data quality, lineage, and privacy.
- Apply security measures such as access control and encryption to protect sensitive data.
- Design and deploy analytics and reporting tools for self-service, visualization, and ad-hoc queries.
- Utilize cloud and big data technologies (AWS, Azure, GCP, Hadoop, Spark, Kafka) to develop scalable, cost-efficient platforms.
- Identify and resolve performance bottlenecks to optimize resource usage and platform efficiency.
- Monitor system performance, plan capacity, and conduct tuning activities.
- Collaborate with engineering teams and business stakeholders to align platform decisions with business and technical needs.
- Lead and mentor engineers, providing technical guidance and support.
- Stay updated on emerging data management trends and technologies.
- Evaluate new tools and technologies, recommending adoption to improve the platform.
Required Skills
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
- Proven experience designing and building scalable, high-performance data platforms.
- Expertise with cloud platforms (AWS, Azure, GCP) and big data processing frameworks (Hadoop, Spark).
- Strong knowledge of data architecture, modeling, integration, and governance.
- Experience with data lakes, warehouses, and NoSQL databases.
- Proficiency in data processing and transformation using Spark, Hadoop, or cloud services.
- Understanding of data security, privacy, and compliance standards.
- Strong leadership and communication skills for effective cross-team collaboration.
- Excellent problem-solving skills to address complex data challenges.
- Experience with data analytics and visualization tools.