We are seeking a highly skilled Senior Data Engineer to join our client's growing data team. The ideal candidate will have extensive experience with cloud platforms (GCP, AWS) and modern data engineering frameworks, along with strong expertise in ETL processes, data modeling, and orchestration tools. You will play a key role in designing, building, and optimizing data pipelines to support analytics, reporting, and machine learning initiatives.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines using modern data engineering frameworks.
Build and optimize ETL/ELT processes for structured and unstructured data from diverse sources.
Develop and implement data ingestion frameworks ensuring data quality, integrity, and reliability.
Work with DBT and Apache Airflow to create and manage data transformations and orchestrations (an illustrative sketch follows this list).
Manage and optimize Snowflake data warehouses, ensuring performance and cost efficiency.
Leverage Kubernetes to deploy, scale, and manage containerized data processing workloads.
Utilize Python and SQL to develop efficient, scalable, and high-performing data solutions.
Design and implement data models to support analytical and operational use cases.
Collaborate with cross-functional teams including Data Scientists, Analysts, and Software Engineers to support business intelligence and machine learning initiatives.
Ensure data governance, security, and compliance best practices are upheld.
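To give a concrete sense of the DBT-plus-Airflow orchestration work described above, here is a minimal, illustrative sketch of a daily Airflow DAG that runs and tests a dbt project. All names and paths (the DAG id, the /opt/dbt/analytics project directory) are hypothetical, not part of the client's actual stack, and the real pipelines would be considerably more involved.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_transformations",  # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",          # Airflow 2.x style scheduling
        catchup=False,
    ) as dag:
        # Run dbt models against the warehouse (e.g. Snowflake), then test them.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )
        dbt_run >> dbt_test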
Required Qualifications:
5+ years of experience in Data Engineering, with a strong background in ETL/ELT development.
Proficiency in GCP and AWS, with hands-on experience in data storage, processing, and orchestration services.
Strong experience with DBT for data transformation and Airflow for orchestration.
Expertise in SQL for complex querying, performance tuning, and optimization.
Hands-on experience with Snowflake, including schema design, performance tuning, and cost optimization.
Strong programming skills in Python for data processing and automation.
Experience working with Kubernetes for managing containerized data workloads.
Strong data modeling skills with a focus on designing scalable and maintainable data architectures.
Experience building and maintaining data ingestion frameworks at scale.
Excellent problem-solving skills and ability to work in a fast-paced environment.
Excellent English communication skills.
Preferred Qualifications:
Experience with streaming data frameworks (e.g., Kafka, Pub/Sub, Kinesis).
Knowledge of data security, compliance, and governance best practices.
Familiarity with CI/CD processes and DevOps methodologies.
What We Offer:
Competitive salary and benefits package.
Opportunity to work with cutting-edge technologies in a data-driven organization.
Collaborative and innovative work environment.
Professional growth and career advancement opportunities.
If you are passionate about building scalable data solutions and have the skills we are looking for, we would love to hear from you! Apply now to be part of our client's dynamic data team.
Date Posted: 05/09/2025
Job ID: 125533571