Flow

Senior Data Engineer


Job Description

Why Flow

A unique opportunity to join a global fintech start-up and be part of the core team building a global SaaS platform for the world's 17,000 banks, 40+ million merchants and 7 billion cardholders. Flow Networks offers banks and merchants a white-label payment data activation platform to engage their customers at the payment moment.

Reporting to the Business, Product, and Solution Architect, the Data Engineer will be responsible for structuring data flows, determining how data is captured, moved, transformed, and combined across the platform, and extracting and visualizing business insights from it.

Key Responsibilities

  • Own the design, development, and operation of Flow's data platform, including real-time and batch pipelines for ingestion, transformation, and serving using Databricks, Spark, Delta Lake, Kafka, and BI tools.
  • Build high-quality, analytics-ready data models that power product features such as personalization, eligibility logic, frequency control, and next-best-action decisioning.
  • Partner closely with Product, GTM, and Engineering teams to translate product requirements into scalable data assets and metrics that directly influence product behavior.
  • Own and evolve the data catalog, semantic layer, and metric definitions, ensuring consistency, discoverability, and trust across the organization.
  • Design data systems that support experimentation and measurement, including test/control assignment, exposure tracking, outcome attribution, and incremental lift analysis.
  • Investigate, reconcile, and resolve data quality issues, proactively improving data reliability, observability, and monitoring.
  • Contribute to applied machine-learning workflows by building feature pipelines, training datasets, and model evaluation frameworks used for personalization and decisioning.
  • Continuously identify platform gaps and performance bottlenecks, and drive improvements in data architecture, scalability, and developer productivity.

You should have

  • Bachelor's degree in Computer Science, Engineering, or equivalent practical experience; startup or high-growth product experience preferred.
  • 5+ years of experience in data engineering, analytics, or data science, with strong ownership of production data systems.
  • Deep experience with distributed data processing (Spark), modern data platforms (Databricks), and lakehouse architectures (Delta Lake).
  • Experience building or supporting real-time and streaming pipelines, preferably using Kafka, Spark Streaming, or Flink.
  • Proven ability to work closely with product and engineering teams to define data requirements and influence product decisions.
  • Solid understanding of experimentation and measurement concepts (test/control, exposure tracking, incrementality), especially in personalization or engagement systems.
  • Hands-on experience enabling machine-learning workflows, including feature engineering, training data pipelines, and model evaluation support.
  • High degree of autonomy, strong debugging skills, and a mindset of continuously improving data quality, reliability, and usability.

Job ID: 136414625
