Computer Vision Engineer

Global-Talent-Exchange

India
Full time
5 - 8 Yrs
Job Openings: 1

Required Skills:

C++

Python

Computer Vision

AI/ML

PyTorch

CUDA

ROS/ROS2

TensorRT

Docker

Linux

Key Responsibilities

  • Design, implement, and maintain core autonomy modules that integrate sensing, perception, state estimation, mapping, and planner interfaces into a cohesive real-time system.
  • Develop high-performance computer vision pipelines (classical + AI-based) for detection, segmentation, tracking, and scene understanding, ensuring reliable operation on embedded hardware.
  • Build multimodal perception systems that fuse camera, LiDAR, radar, and IMU data into accurate, navigation-ready environment representations.
  • Deploy, optimize, and maintain autonomy software on embedded platforms (Jetson AGX/Orin), including TensorRT optimisation, cross-compilation, CUDA acceleration, and performance tuning for real-time execution.
  • Own sensor bring-up, configuration, calibration, and synchronization (camera, LiDAR, radar, IMU, GPS), ensuring accurate and stable data for downstream modules.
  • Ensure system-level robustness and safety by maintaining strict latency budgets, deterministic behaviour, numerical stability, and fall-back mechanisms for degraded sensing conditions.
  • Conduct field trials, capture datasets, analyse system performance, and drive iterative improvements across sensing, perception, fusion, and planning layers.
  • Debug deep autonomy stack issues including timing mismatches, calibration drift, concurrency conflicts, synchronization faults, and hardware–software integration challenges.
  • Build deployment-ready autonomy systems using ROS/ROS2, Docker, systemd services, and reproducible build pipelines tailored for embedded platforms.
  • Collaborate with mechanical, electronics, and systems teams to align autonomy software capabilities with real-world hardware constraints and vehicle dynamics.
  • Contribute to autonomy architecture evolution, influencing design decisions, modularisation strategy, safety mechanisms, and long-term capability roadmap.
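Several of the responsibilities above revolve around sensor synchronization and timing (pairing camera frames with LiDAR sweeps, detecting drift). As a minimal illustrative sketch of that kind of logic, the following pairs each camera timestamp with the nearest LiDAR timestamp within a tolerance; the function name, inputs, and tolerance are hypothetical, not part of any specific stack used in this role:

```python
import bisect

def match_timestamps(cam_stamps, lidar_stamps, tol):
    """Pair each camera timestamp with the nearest LiDAR timestamp
    within `tol` seconds; frames with no match are dropped.

    Both input lists are assumed sorted in ascending order.
    """
    pairs = []
    for t in cam_stamps:
        i = bisect.bisect_left(lidar_stamps, t)
        # Candidates are the LiDAR stamps immediately before and after t.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(lidar_stamps):
                if best is None or abs(lidar_stamps[j] - t) < abs(best - t):
                    best = lidar_stamps[j]
        if best is not None and abs(best - t) <= tol:
            pairs.append((t, best))
    return pairs
```

Production stacks typically rely on middleware-level tools (e.g. approximate-time synchronization in ROS/ROS2) rather than hand-rolled matching, but the underlying nearest-neighbour-within-tolerance idea is the same.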

Required Technical Skills

  • Bachelor’s degree in Robotics, Computer Science, Mechatronics, or a related field is required.
  • Master’s or PhD in Robotics, Autonomous Systems, AI/ML, Computer Vision, or Control Systems is preferred.
  • Strong proficiency in modern C++ (14/17/20) and Python for building production-grade robotics, CV, and autonomy software.
  • Deep understanding of computer vision fundamentals (feature-based vision, geometric methods, multi-view geometry) and AI-based perception using PyTorch.
  • Practical experience deploying and optimising perception models on embedded GPU platforms (Jetson Xavier/Orin or similar).
  • Hands-on expertise with Triton, TensorRT, mixed-precision inference, Numba-JIT, CUDA kernels, and real-time optimisation techniques.
  • Strong command of ROS/ROS2, TF transforms, message passing, node graph architecture, and middleware integration patterns.
  • Extensive experience with robotics sensor integration including RGB/stereo/depth cameras, LiDAR, radar, IMUs, and GPS—covering calibration (intrinsic/extrinsic), synchronization, timestamps, and data integrity.
  • Knowledge of core autonomy concepts: mapping, costmap generation, scene representation, obstacle detection, and planner interfacing.
  • Solid grounding in Linux systems, multithreading, memory optimisation, real-time constraints, and system-level debugging workflows.
  • Experience with Docker, cross-compilation toolchains, embedded deployment pipelines, and CI/CD systems for robotics software.
  • Familiarity with simulation tools (Gazebo, CARLA, Isaac Sim) for developing reproducible test setups and automated validation.
  • Ability to troubleshoot complex issues across perception, fusion, hardware interfaces, timing, concurrency, and algorithmic edge cases.
  • Strong understanding of coordinate frames, transforms, camera models, rigid-body geometry, and numerical optimisation methods.
  • Experience using logging frameworks, telemetry tools, performance profilers, and methods for long-duration stability testing.
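The skills above include coordinate frames, rigid-body transforms, and camera models. As a hedged sketch of those fundamentals, the snippet below projects 3-D world points into pixel coordinates with a rigid extrinsic (R, t) and a pinhole intrinsic matrix K; the function name and the example intrinsics are illustrative only:

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points to Nx2 pixel coordinates.

    R (3x3) and t (3,) map world coordinates into the camera frame;
    K (3x3) is the pinhole intrinsic matrix. Points behind the
    camera (z <= 0) are not handled in this sketch.
    """
    pts_cam = R @ points_world.T + t.reshape(3, 1)  # world -> camera frame
    uvw = K @ pts_cam                               # camera -> image plane
    return (uvw[:2] / uvw[2]).T                     # perspective divide
```

For example, with identity extrinsics a point on the optical axis at z = 2 m lands exactly on the principal point encoded in K, which is a quick sanity check used when validating a calibration.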

About Company

Global-Talent-Exchange
https://globaltalex.com/
Discover high-impact roles worldwide
10-20 Employees
Information Technology & Services