Multi-Modality Model Engineer

Skills

LiDAR, PyTorch

Job Overview

Join our team to build, pre-train, and evaluate large-scale multi-modality foundation models. You will align diverse data streams (vision, LiDAR, radar, language, and audio) and help define the ML roadmap for deploying multi-modality representations in vehicles.

Responsibilities
  • Build, pre-train, and evaluate multi-modality foundation models
  • Architect Knowledge Distillation pipelines for model compression
  • Create training/evaluation datasets for cross-modal learning
  • Collaborate with perception teams to validate on-board performance

Requirements & Qualifications
  • MS/PhD in CS/ML or related field
  • Experience with building/training large VLMs
  • Strong cross-modal alignment and pre-training skills
  • Proficiency in PyTorch and large-scale ML pipelines
  • Experience in autonomous driving or robotics

Job Type: Remote

Salary: Not Disclosed

Experience: Entry

Duration: 12 Months
