Job Title


Expert Machine Learning Perception Engineer – Self-Driving


Company : Hexad Infosoft IN


Location : Amritsar, Punjab


Created : 2026-01-29


Job Type : Full Time


Job Description

Position Title: Expert Machine Learning Perception Engineer – Self-Driving

Work Location: Bangalore

Model: Hybrid / Work from Office (as per business requirement)

Experience: 5+ years

Number of Positions: 3

Role Requirement: Immediate joiners or candidates available within 30 days can apply

About the Role:

We are looking for a highly skilled and innovative Expert Machine Learning Perception Engineer to join our autonomous driving team. This role is ideal for candidates with strong expertise in 3D perception, multi-sensor fusion, and foundation models who are passionate about building production-grade perception systems for self-driving applications. The ideal candidate will have a strong research mindset combined with hands-on experience in deploying scalable and robust machine learning solutions.

Key Responsibilities:

• Develop, prototype, and optimize algorithms for 3D object detection, occupancy detection, multi-object tracking, semantic segmentation, and 3D scene understanding using multi-sensor data (Camera, LiDAR, Radar, etc.).
• Design and adapt foundation models for self-driving perception to enable scalable and generalizable representations across tasks and sensor modalities.
• Build and enhance multi-sensor fusion pipelines to improve robustness across diverse and challenging driving scenarios.
• Research, evaluate, and implement state-of-the-art machine learning and computer vision techniques such as BEV perception, occupancy networks, multimodal fusion, and end-to-end perception models.
• Translate research outcomes into production-ready solutions, ensuring scalability, robustness, and efficiency.
• Optimize deep learning models for real-time inference on automotive-grade hardware.
• Design and conduct evaluation, benchmarking, and validation of perception models using real-world datasets and simulation environments.
• Collaborate closely with cross-functional teams to integrate perception models into the full autonomous driving stack.
• Contribute to data curation, annotation strategies, and scalable training pipelines to accelerate perception development.
• Write high-quality, efficient, and well-tested code while promoting engineering best practices.
• Stay up to date with the latest advancements in Computer Vision, Deep Learning, and Robotics.

Mandatory Skills & Qualifications:

• MSc or PhD in Computer Science, Electrical Engineering, Robotics, or a related field.
• 5+ years of relevant industry experience in machine learning, perception, or autonomous systems.
• Strong foundation in Machine Learning, Computer Vision, and 3D Geometry.
• Hands-on experience with architectures for 3D object detection and multi-object tracking.
• Familiarity with foundation models for vision or multimodal learning, including large-scale pre-training, transfer learning, and self-supervised learning.
• Proficiency in Python and/or C++.
• Strong experience with modern ML frameworks such as PyTorch or TensorFlow.
• Hands-on experience handling 3D point cloud data.
• Knowledge of multi-sensor calibration and sensor fusion techniques.
• Strong software engineering skills with a focus on scalability, reliability, and performance optimization.
• Ability to take algorithms from research to production deployment.
• Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.

Nice-to-Have (Preferred):

• Experience with autonomous driving or ADAS perception systems.
• Background in SLAM, 3D reconstruction, occupancy networks, or sensor simulation.
• Proven publication record in top-tier conferences.

How to Apply: