About Us
We are a global leader in smart mobility SaaS, serving over 2.3 million subscribers across 23 countries. Our teams are collaborative, fast-growing, and empowered to shape the future of our products and technology.
Explore more about us:
Cartrack (Global): https://www.cartrack.co.za/
Singapore: https://www.cartrack.sg/
Developer Portal: https://developer.cartrack.com/
Our diversified product portfolio spans vehicle recovery, fleet management, and workforce optimisation, among other solutions.
This role sits within our Data Platform team, supporting operations across Asia, Africa, and Europe.
About the Role
We are looking for a curious and driven Data Platform Engineer who thrives on solving complex data challenges. You will take ownership of building and optimising scalable data pipelines and platforms that power critical business insights. If you enjoy working in a high-impact environment and pushing data systems to their limits, this role is for you.
Key Responsibilities
- Design, build, and maintain scalable data pipelines and ETL workflows
- Develop monitoring systems to detect and respond to upstream data changes
- Optimise database performance (partitioning, indexing, schema design, parameter tuning)
- Ensure high standards of data quality, lineage, and documentation
- Collaborate with cross-functional teams across regions to deliver reliable data solutions
- Support deployment, debugging, and performance tuning in production environments
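One responsibility above, detecting and responding to upstream data changes, can be sketched in plain Python. This is an illustrative assumption about one common approach (schema fingerprinting), not a description of the team's actual tooling; all function names here are hypothetical.

```python
import hashlib
import json

def schema_fingerprint(columns):
    """Hash an upstream table's column names and types into a stable fingerprint.

    Sorting makes the fingerprint order-independent, so a mere column
    reordering upstream does not trigger a false alarm.
    """
    canonical = json.dumps(sorted(columns.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def detect_schema_change(previous_fingerprint, columns):
    """Return (changed, new_fingerprint) so a pipeline can alert and update state."""
    current = schema_fingerprint(columns)
    return current != previous_fingerprint, current

# Example: a new column appears in the upstream feed.
v1 = schema_fingerprint({"id": "bigint", "ts": "timestamp"})
changed, v2 = detect_schema_change(
    v1, {"id": "bigint", "ts": "timestamp", "region": "text"}
)
# changed is True: the monitoring job can now page an engineer and store v2.
```

A real deployment would persist the fingerprint between runs (e.g., in a metadata table) and compare on each pipeline execution.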
Core Technical Skills
- Proficiency in one or more: C, C++, C#, Go, or Rust
- Strong experience with Python
- Hands-on experience with Airflow and complex ETL workflows
- Experience with analytics databases (e.g., ClickHouse, Druid, StarRocks, Doris)
- Experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server, Oracle)
- Familiarity with distributed query engines (e.g., Spark, Trino)
- Solid understanding of database optimisation techniques
- Comfortable working in Linux environments (scripting, monitoring, debugging)
- Experience with Docker and containerisation
- Familiarity with CI/CD pipelines and version-control platforms (GitLab, GitHub, Bitbucket)
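As a small illustration of the partitioning technique named in the skills above, a time-based partition key can be derived from an event timestamp so that queries scan only the relevant months. This is a generic sketch under assumed naming conventions; the databases and partition layouts actually in use may differ.

```python
from datetime import datetime, timezone

def monthly_partition(event_ts: datetime) -> str:
    """Map an aware timestamp to a 'yYYYYmMM' partition label (UTC),
    the kind of key used to route rows in time-partitioned tables."""
    ts = event_ts.astimezone(timezone.utc)
    return f"y{ts.year:04d}m{ts.month:02d}"

# Route rows to partitions before a bulk load.
rows = [
    {"id": 1, "ts": datetime(2024, 1, 31, 23, 59, tzinfo=timezone.utc)},
    {"id": 2, "ts": datetime(2024, 2, 1, 0, 0, tzinfo=timezone.utc)},
]
buckets = {}
for row in rows:
    buckets.setdefault(monthly_partition(row["ts"]), []).append(row["id"])
# buckets == {"y2024m01": [1], "y2024m02": [2]}
```

Keeping each month in its own partition lets the planner prune untouched partitions and makes dropping expired data a cheap metadata operation rather than a bulk delete.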
Nice to Have / Growth Areas
- Kubernetes and Helm
- Stream processing for real-time data ingestion
- Feature engineering and collaboration with data scientists
- Experience with AI/LLM-powered data workflows
- Building LLM agents, workflows, and RAG systems (e.g., LlamaIndex, LangGraph, FastAPI)
- Designing low-latency data structures for AI-driven applications
Qualifications & Experience
- Bachelor's Degree or Advanced Diploma in IT, Computer Science, or Engineering
- 3–5 years of experience in a software or data engineering environment, with hands-on involvement in designing, deploying, and monitoring data systems
- OR 6–10 years of relevant industry experience in lieu of formal qualifications