
JOB DESCRIPTION
  • Design, develop, and maintain scalable ETL pipelines for large-scale data processing.
  • Clean, transform, and standardize data from multiple internal and external sources.
  • Integrate data from enterprise systems including ERP, MES, and factory systems.
  • Build automation scripts and schedule data workflows to ensure reliable data delivery.
  • Work with big data technologies such as Spark, Hadoop, or equivalent frameworks.
  • Manage cloud-based storage, data lakes, and overall data architecture.
  • Ensure data quality, accuracy, scalability, and security across all data pipelines.
REQUIREMENTS
  • Bachelor's degree in Computer Science, Information Systems, or a related field.
  • Minimum 3 years of experience in Data Engineering or similar roles.
  • Strong proficiency in SQL and Python.
  • Solid experience in designing and maintaining ETL pipelines and data transformation processes.
  • Hands-on experience with Spark, Hadoop, or other big data frameworks.
  • Experience integrating data from enterprise systems (ERP/MES).
  • Knowledge of cloud storage solutions and data lake architecture.
  • Ability to develop automation scripts and manage data scheduling workflows.

More Info


Job ID: 137386817
