Machine Learning Engineer (VLA)

  • Posted 20 hours ago

Job Description

The Role

We are looking for a machine learning engineer with deep experience in multimodal models and a genuine interest in robotics and embodied intelligence. You will work across the full model development lifecycle, from architecture design and training to evaluation and deployment, contributing directly to the models that power our world model, Shadow mapping engine, and VLA policy layer.

This is not a role for someone looking to fine-tune existing models. You will be working on novel architectures, new training paradigms, and problems at the frontier of physical AI.

What You Will Work On

  • Developing and training multimodal architectures that fuse vision, tactile, force, and kinematic data streams into unified representations
  • Building and iterating on VLA policy models that generalise across robot types and unseen environments
  • Designing and running training pipelines on large-scale multimodal datasets combining internet video, synthetic simulation data, and DK-1 sensorimotor capture
  • Collaborating with hardware and robotics engineers to close the loop between model outputs and real robot behaviour

Required Skills

  • Strong Python with deep hands-on experience in PyTorch (this is a PyTorch shop)
  • Solid fundamentals in machine learning and deep learning: you understand what is happening inside the models you build, not just how to run them
  • Working knowledge of modern AI paradigms: LLMs, vision models, multimodal architectures
  • Experience with transformer architectures, including diffusion models and autoregressive approaches
  • Experience running end-to-end model development workflows: data pipelines, training runs, evaluation, iteration
  • Comfortable operating in a distributed, multicultural, fast-moving team with strong async communication habits
  • Strong written and spoken English for daily technical discussion

Preferred Qualifications

  • Experience with VLA models, embodied AI, or multimodal foundation models (Pi Zero, OpenVLA, Octo, or similar)
  • Background working with non-standard sensor modalities: tactile, force, proprioception, kinematic streams
  • Experience with physics-grounded or symmetry-aware architectures: Lagrangian networks, Hamiltonian mechanics, equivariant models
  • Familiarity with sim-to-real transfer, domain randomisation, or reinforcement learning in robotics contexts
  • Experience with model optimisation or deployment at inference time: quantisation, distillation, real-time control loops
  • Proficiency in C++ or systems-level programming for robot integration
  • MSc or PhD in machine learning, robotics, computer vision, or a related field, or equivalent research experience and open-source contributions
  • Publications at NeurIPS, ICML, ICLR, CoRL, ICRA, or similar venues are a strong signal

What We Offer

  • Direct collaboration with world-class researchers in robotics and machine learning
  • Early-stage equity in a company building foundational infrastructure for the physical AI era
  • Fully remote with flexible working arrangements
  • Access to proprietary hardware, datasets, and compute infrastructure not available anywhere else
  • The chance to publish research under the Motoniq Labs Physical AI Foundation banner

How to Apply

Send a short note on what you have built, what you are currently working on, and why this problem matters to you. Link to any relevant work: GitHub, papers, demos. We do not require a formal CV, but we welcome one if you have it.

  • Applications to: [[Confidential Information]]


Job ID: 145245395