
Qualgo Technologies Vietnam

Senior Data Scientist - Computer Vision (Autonomous UAV)


Job Description

Qualgo is an R&D center specializing in cybersecurity products and solutions. We are on a mission to build a trusted cyberspace where individuals and businesses can thrive with confidence.

Role Summary:

As a Senior Data Scientist focused on Computer Vision for autonomous UAVs, you will be a key contributor to the design, training, and deployment of on-board perception systems that enable safe, reliable flight. You'll specialize in real-time vision for tasks such as obstacle detection/avoidance, target recognition, landing, and mapping, and you'll work with large, diverse image/video datasets - including simulated data - to deliver production-grade models that run under tight SWaP (size/weight/power) constraints. You'll collaborate closely with autonomy, controls, firmware, and safety/compliance teams to ship a world-class perception stack that meets operational and regulatory requirements across multiple markets.

Key Responsibilities:

1. Model Development & Deployment

  • Design, develop, train, evaluate, and deploy computer vision models for autonomous UAV applications, including:
    ◦ Object Detection/Tracking: Dynamic/low-altitude obstacle awareness (people, vehicles, power lines, wildlife).
    ◦ Semantic/Instance Segmentation: Terrain understanding, safe-landing-zone detection, drivable/no-fly area masking.
    ◦ Depth & 3D Perception: Stereo/mono-depth, optical flow, VIO/SLAM, and 3D reconstruction for GPS-denied navigation.
    ◦ Multi-Sensor Fusion: Calibrate/synchronize and fuse RGB, thermal, and event cameras, IMU, and LiDAR/radar for robust perception.
    ◦ Image/Signal Quality Assessment: Auto-exposure/blur/noise checks, sensor health monitoring, and data QA gates.

  • Work with large, heterogeneous datasets (real flight logs, edge cases, nighttime/thermal, adverse weather) plus simulation and synthetic data with domain randomization.
  • Implement data preprocessing, augmentation, active learning, and scalable labeling strategies.
  • Use PyTorch/TensorFlow and OpenCV; productionize models with ONNX/TensorRT, quantization/pruning, and other inference optimizations for edge accelerators (e.g., NVIDIA Jetson).
  • Integrate models into the autonomy stack (ROS 2, PX4/ArduPilot) with real-time interfaces, fault detection, and safe fallback behaviors.
  • Deploy and monitor models in production; build telemetry loops for continuous improvement.

2. Data Analysis & Insights

  • Define and track performance metrics (e.g., mAP, MOTA, latency/FPS, power/thermal budgets) and go/no-go criteria for flight.
  • Analyze failure modes from flight tests and post-flight logs; deliver clear recommendations to product/engineering.

3. Collaboration

  • Partner with product and mission teams to translate requirements (range, altitude, environment, targets) into CV solutions.
  • Work closely with autonomy/controls/firmware to ensure end-to-end system performance and timing guarantees.
  • Collaborate with safety & compliance to align with airspace and operational regulations in target markets (e.g., Vietnam, Thailand, UAE), ensuring appropriate data handling and operational procedures.
  • Share best practices with fellow data scientists; contribute to standards, code reviews, and documentation.

4. Research & Innovation

  • Stay current with CV/robotics literature (Transformers for vision, event-based vision, uncertainty/OOD detection).
  • Explore novel techniques (self-supervised learning, NeRF/occupancy mapping, learned sensor fusion) and evaluate them in SITL/HITL and flight tests.
  • Champion MLOps: experiment tracking, model registry, CI/CD for ML, reproducible training, and model monitoring on-drone and in the cloud.

Qualifications:

  • Bachelor's, Master's, or PhD degree in Computer Science, Robotics, EE, or a related field (or equivalent experience).
  • 5-8+ years in data science/ML with a strong focus on computer vision, including at least 2 years deploying models to real-time or safety-critical systems.
  • Proficiency in Python (preferred) and C++ for performance-critical on-board code.
  • Deep knowledge of modern CV: detection/tracking, segmentation, depth/optical flow, VIO/SLAM, 3D geometry (camera models, epipolar geometry, PnP).
  • Hands-on with ROS/ROS 2 and flight stacks (PX4/ArduPilot), including time synchronization and message pipelines.
  • Experience deploying to edge hardware (e.g., NVIDIA Jetson Orin/Xavier) with TensorRT/ONNX; comfort with quantization/pruning and memory/latency trade-offs.
  • Strong sensor calibration experience (mono/stereo cameras, IMU, LiDAR) and fusion (EKF/UKF or learned).
  • Proven track record of production deployments, telemetry-driven iteration, and operating within latency, power, and thermal constraints.
  • Excellent communication; ability to lead/mentor and collaborate across autonomy, firmware, and product.

Preferred:

  • Prior work on autonomous robots/UAVs or other safety-critical platforms.
  • Experience with simulation (AirSim, Gazebo/Ignition), synthetic data, and domain randomization.
  • Familiarity with thermal/event cameras, nighttime/low-light perception, and adverse-weather robustness.
  • Knowledge of mapping (TSDF/occupancy grids, loop closure), NeRF/implicit fields, and multi-view geometry at scale.
  • Exposure to cloud platforms (AWS/GCP/Azure) for data pipelines and model training; Kubernetes is a plus.
  • Experience working under regional regulations/operational constraints (e.g., Southeast Asia/Middle East operations).

Skills:

  • Exceptional technical skills in computer vision and machine learning; strong analytical/problem-solving abilities.
  • Clear, concise communication and cross-functional collaboration.
  • Ability to own problems end-to-end, operate independently, and drive measurable outcomes.
  • Passion for building innovative, high-impact perception solutions that make autonomous flight safer and more capable.
  • Fluency in English.

What we offer:

  • Competitive salary and benefits package.
  • 100% salary during probation period.
  • Full insurance contribution based on 100% of salary.
  • Opportunity to work on a product that impacts millions of users.
  • A dynamic and supportive work environment.
  • Premium health insurance for you and your family.
  • Professional growth and development opportunities.
  • Annual leave: 12-14 days per year, plus 1 birthday leave day and 1 Christmas leave day.
  • Performance review: once per year.
  • Internal training/sharing sessions and professional training courses.
  • Team building, company trips, year-end party, and monthly activities.
  • Devices: MacBook and external screen (if needed).
  • Free tea and coffee.
  • Comfortable working area.
  • Working hours: 9am - 6pm, Monday to Friday.

Location: The Hallmark Building - 15 Tran Bach Dang, An Khanh Ward, HCMC.


Job ID: 135157045