Synodus

Data Engineer (Junior/Middle)

  • Posted 3 hours ago
Job Description

  • Participate in building and optimizing data pipelines for clients in on-premises environments: collecting, processing, cleaning, transforming, and supporting data visualization based on project requirements.
  • Assist in setting up infrastructure to support ETL/ELT processes from multiple data sources using SQL and common Big Data technologies.
  • Support the setup and operation of on-premises components such as Apache Spark, Kafka, Hadoop, NiFi, MinIO, Docker/K8S, Trino, Iceberg, depending on the project.
  • Contribute to optimizing pipeline performance, automating manual workflows, improving data transfer efficiency, and enhancing system scalability.
  • Work directly with clients to understand requirements and propose appropriate solutions for on-premises data systems.
  • Collaborate with Data Analysts, Data Scientists, and DevOps engineers to improve system stability, reliability, and performance.
  • Participate in writing technical documentation, deployment processes, and supporting data testing.

Requirements

  • Bachelor's degree in Computer Science, Information Technology, Software Engineering, or a related field.

Junior Level

  • 1–2.5 years of experience as a Data Engineer or Backend Engineer with data-related tasks.
  • Solid programming foundation and proficiency in at least one language: Python / Java / Scala.
  • Understanding of SQL and relational databases (MySQL, PostgreSQL, or similar), plus basic knowledge of distributed/big data systems.
  • Familiarity with Linux and Git.
  • Ability to read and understand technical documentation in English.

Middle Level

  • 2.5–4 years of experience as a Data Engineer, preferably with on-premises deployment experience.
  • Strong SQL skills and experience working with RDBMS (especially Oracle), plus hands-on experience with Kafka / Spark / Hadoop / Airflow or similar tools.
  • Proficiency in Python/Java/Scala with solid understanding of file processing, logging, performance optimization, and data testing.
  • Experience with documentation writing, data validation, and pipeline optimization.
  • Good understanding of data modeling and data architecture.
  • Experience with Data Warehouse / Data Lakehouse on-premises projects is a strong advantage.

Benefits

  • Competitive and negotiable compensation package, including additional project bonuses.
  • 13th-month salary and annual performance review.
  • Holiday bonuses for four major occasions: April 30, September 2, Lunar New Year, and New Year.
  • Company trips, quarterly/monthly team-building activities, and internal cultural events.
  • Social and health insurance in compliance with Vietnamese law.
  • BSH health insurance package covering medical expenses.
  • Annual health check-up.
  • Union benefits: birthday gifts, special-occasion allowances, etc.
  • Dynamic and open technical environment encouraging knowledge sharing, internal seminars, and sustainable career development.
  • Opportunity to work on large-scale projects using cutting-edge technologies.

Job ID: 145208675