LHV Software

Senior Data Engineer (HCMC, Hybrid)

This job is no longer accepting applications

  • Posted 3 months ago

Job Description

  • Location: Ho Chi Minh City, Viet Nam
  • Job Type: Full-time
  • Experience: 5-10 years

Job Summary:

We are seeking a skilled Big Data Engineer with 5-10 years of experience in building and maintaining scalable, high-performance data pipelines and processing frameworks. The ideal candidate will have strong hands-on expertise in orchestration using Apache Airflow and distributed data processing with Apache Spark. This role requires a solid understanding of big data architecture, data engineering best practices, and a commitment to delivering efficient, reliable, and maintainable data solutions that align with business and technical requirements.

Roles and Responsibilities:

  • Design, build, and manage scalable and reliable data pipelines using Apache Airflow.
  • Develop and optimize large-scale data processing workflows using Apache Spark, covering both batch and Structured Streaming.
  • Work closely with data architects, analysts, and business stakeholders to translate data requirements into efficient data engineering solutions.
  • Ensure the quality, performance, and security of data processes and systems.
  • Monitor, troubleshoot, and optimize data workflows and job execution.
  • Document solutions, workflows, and technical designs.

Qualifications:

Required:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.

  • 5-10 years of experience in data engineering or related roles.
  • Strong experience with Apache Airflow for data orchestration and workflow management.
  • Proven expertise in building and tuning distributed data processing applications using Apache Spark (PySpark), both Structured Streaming and batch.
  • Solid understanding of big data platforms and cloud-based data ecosystems (AWS).
  • Proficient in SQL and in working with large datasets from various sources (structured and unstructured).
  • Experience with data lakes, data warehouses, and batch/streaming data architectures.
  • Familiarity with CI/CD pipelines, version control, and DevOps practices in a data context.
  • Strong problem-solving and communication skills, with the ability to work both independently and collaboratively.

Preferred:

  • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
  • Experience working in Agile/Scrum environments.
  • Knowledge of data quality frameworks and validation engines.
  • Experience with data catalog tools.

Benefits

  • Hybrid working model (work from home + office).
  • 15 days annual leave, Monday-Friday schedule.
  • MacBook Pro provided.
  • Attractive salary & fully paid insurance.
  • 13th-month salary (paid before Lunar New Year).
  • Premium healthcare insurance (PVI).
  • Allowances: team building, sports activities, parking fees, electricity & internet fees.
  • Training courses (English, IT, soft skills).
  • Annual company trip (4 days, including weekends).
  • GoodLife program: quarterly trips & sports clubs.
  • Annual health check-up.


Job ID: 135910211
