Zensomy

Software Engineer Intern

Location: Chennai / Hybrid
Duration: 6 Months
Type: Internship

Reference ID: INT00126 

Role Overview

We are looking for a Software Engineer Intern to help build our fleet management and teleoperation platform—the interface between users and autonomous machines.

This system enables:

  • Task scheduling and mission planning
  • Live monitoring of robot status
  • Real-time video streaming
  • Remote teleoperation of machines

You will work across frontend and backend systems, contributing to a product that directly interacts with real-world robots.

Key Responsibilities
  • Develop and improve web-based interfaces for fleet management
  • Build features for:
    • Task scheduling and mission control
    • Live system monitoring dashboards
    • Real-time video streaming interfaces
    • Teleoperation controls
  • Design and implement backend services and APIs
  • Integrate with robotics systems (e.g., autonomy stack, ROS2 pipelines)
  • Handle real-time data flows (telemetry, video, commands)
  • Ensure system reliability, responsiveness, and usability
  • Collaborate closely with autonomy and robotics engineers

Required Qualifications
  • Pursuing or recently completed a degree in Computer Science or a related field
  • Strong programming skills in:
    • JavaScript / TypeScript
    • Python or Node.js
  • Experience with:
    • Frontend frameworks (React, Next.js, or similar)
    • Backend development (REST APIs, WebSockets, etc.)
  • Understanding of:
    • Client-server architecture
    • Real-time systems (at a basic level)
  • Familiarity with Git and software development workflows

Preferred Qualifications
  • Experience with:
    • Real-time video streaming (WebRTC, RTSP, etc.)
    • WebSockets or event-driven systems
    • Docker or cloud deployment
  • Basic exposure to robotics systems or ROS2
  • Exposure to:
    • Human-machine interfaces (HMI)
    • Control systems or teleoperation
What You’ll Gain
  • Build a real product used to control autonomous machines
  • Work on end-to-end system design (frontend + backend + robotics integration)
  • Exposure to real-time systems and robotics workflows
  • High ownership and fast learning in a startup environment
  • Potential full-time opportunity based on performance

Application Process

Please submit:

  • Resume/CV   
  • Brief description of relevant projects
  • Links to GitHub / portfolio (if available)

📩 Apply at: careers@zensomy.com


Location: Chennai / Remote
Duration: 6 Months
Type: Thesis / Research Internship

Reference ID: INT00226

Background & Motivation

Autonomous navigation in off-road environments (e.g., forests, agricultural land, construction zones, and rugged terrain) introduces challenges far beyond structured urban driving. These environments lack well-defined geometry, semantic consistency, and reliable priors.

Key difficulties include:

  • Highly unstructured and deformable terrain (mud, sand, vegetation)
  • Significant appearance variability across seasons and weather
  • Ambiguous traversability, where obstacles are not clearly defined
  • Sensor degradation due to dust, lighting variation, and occlusion

To address these challenges, perception systems must move beyond 2D understanding and adopt 3D spatial representations. In particular, 3D occupancy grids combined with semantic segmentation provide a powerful framework for modeling free space, obstacles, and terrain traversability in off-road settings.

Research Challenges

This thesis will explore critical challenges in off-road perception:

  • Traversability estimation: Understanding what terrain is drivable vs hazardous
  • Sparse and noisy sensor data: Especially in LiDAR and vision under harsh conditions
  • Generalization: Robust performance across unseen terrains
  • 3D representation efficiency: Balancing resolution, memory, and compute in occupancy grids
  • Real-time deployment: Ensuring low latency on embedded platforms
  • Limited labeled data: Scarcity of annotated off-road datasets

Objectives

The primary objectives of this thesis are:

  • Develop deep learning models for semantic segmentation of off-road environments
  • Design and implement 3D occupancy grid / voxel-based representations for scene understanding
  • Integrate semantic information into occupancy grids for semantic occupancy mapping
  • Explore sensor fusion techniques (camera + LiDAR or radar) to improve robustness
  • Integrate perception modules into a ROS2-based autonomy stack
  • Optimize models for real-time inference on edge/embedded systems
  • Evaluate system performance across diverse off-road scenarios

Scope of Work

The thesis will involve both research and system-level development:

  • Literature review on:
    • Off-road perception and traversability analysis
    • Semantic segmentation and 3D occupancy mapping
  • Dataset exploration (off-road datasets or custom data collection)
  • Development of deep learning pipelines using PyTorch/TensorFlow
  • Implementation in Python and C++
  • Integration with ROS2-based systems
  • Experimental evaluation under varying terrain and environmental conditions
  • Documentation and thesis/report preparation

Expected Outcomes
  • A perception pipeline focused on:
    • Semantic segmentation of terrain
    • 3D semantic occupancy grid generation
  • Improved understanding of traversable vs non-traversable regions
  • Real-time or near real-time system performance
  • Codebase, experimental results, and final thesis report
  • Potential for research publication or integration into real-world systems

Required Qualifications
  • Final-year undergraduate or postgraduate student in:
    • Computer Science, Robotics, Electrical Engineering, or related field
  • Strong foundation in:
    • Deep Learning and Computer Vision
  • Proficiency in:
    • Python and C++
  • Familiarity with:
    • PyTorch or TensorFlow
    • Robotics systems and ROS2
  • Understanding of:
    • Linear algebra, probability, and 3D geometry

Preferred Qualifications
  • Experience with:
    • Semantic segmentation or scene understanding
    • 3D perception, point clouds, or voxel-based methods
  • Familiarity with:
    • OpenCV, CUDA
    • Simulation or real-world robotics systems
  • Exposure to research or prior relevant projects

Supervision & Support
  • Mentorship from engineers and researchers working on real-world off-road autonomy
  • Exposure to practical deployment challenges in unstructured environments
  • Access to datasets, tools, and compute resources

Application Process

Please submit:

  • Resume/CV
  • Statement of purpose (highlighting interest in off-road autonomy/research)
  • Links to projects, GitHub, or publications

📩 Apply at: careers@zensomy.com

Deep Learning Perception Engineer

Location: Chennai / Hybrid
Type: Full-Time

Reference ID: FTE00126

Role Overview

We are looking for a Deep Learning Perception Engineer to design and deploy robust perception systems for off-road autonomy. You will work on 3D scene understanding, semantic segmentation, object detection & tracking, and occupancy grid-based representations, contributing directly to production-grade autonomy stacks.

This role combines research and engineering, with a strong emphasis on real-world deployment and system integration.

Key Responsibilities
  • Develop and deploy deep learning models for:
    • Semantic segmentation of off-road environments
    • Object detection and multi-object tracking
    • 3D occupancy grid / voxel-based scene representation
  • Build semantic occupancy mapping pipelines integrating perception outputs into spatial representations
  • Design and implement multi-sensor fusion systems (camera, LiDAR, radar)
  • Develop high-performance components using Python and C++
  • Integrate perception modules into ROS2 or similar middleware architectures
  • Optimize models for real-time inference on edge/embedded platforms
  • Work closely with robotics, planning, and systems teams to enable end-to-end autonomy
  • Evaluate performance across diverse terrains and environmental conditions
  • Contribute to system architecture and technical decision-making

Required Qualifications
  • Bachelor’s / Master’s / PhD in:
    • Computer Science, Robotics, Electrical Engineering, or related field
  • Strong experience in:
    • Deep Learning and Computer Vision
  • Hands-on experience with:
    • Semantic segmentation and/or object detection models
  • Proficiency in:
    • Python and C++
  • Experience with:
    • PyTorch or TensorFlow
    • ROS2 or robotics middleware
  • Solid understanding of:
    • 3D geometry, coordinate systems, and perception pipelines
    • Linear algebra, probability, and optimization

Preferred Qualifications
  • Experience with:
    • 3D object detection, point clouds, or voxel-based methods
    • Multi-object tracking (MOT) and temporal perception systems
    • Occupancy grids, BEV representations, or volumetric mapping
  • Background in:
    • Off-road autonomy or robotics systems or automated driving
  • Familiarity with:
    • CUDA, TensorRT, or model optimization techniques
    • Sensor calibration and fusion pipelines
  • Experience working with real-world datasets or data collection systems
  • Contributions to research, open-source, or deployed systems

What You’ll Work On
  • Perception systems for unstructured, off-road environments
  • Handling challenging terrain conditions (mud, vegetation, uneven surfaces)
  • Designing systems that generalize across highly variable real-world scenarios
  • Integrating spatial (3D occupancy) and temporal (tracking) understanding
  • Bridging the gap between research prototypes and deployable systems

What We Offer
  • Opportunity to work on cutting-edge autonomous technology
  • High ownership and impact in a fast-paced startup environment
  • Direct collaboration with founders and core engineering team
  • Competitive compensation and growth opportunities
  • Access to real-world deployment and testing environments

Application Process

Please submit:

  • Resume/CV
  • Brief description of relevant projects or experience
  • Links to GitHub, portfolio, or publications (if available)

📩 Apply at: careers@zensomy.com

Technical Skills & Experience

Degree in computer science, information technology, engineering, or a related field (Bachelor's, Master's, or PhD).

Strong development experience in at least one of the following areas: hands-on embedded code development, safety-critical software development for real-time systems, CI/CD pipelines, ROS/ROS2 middleware, or simulation environments (IsaacSim, CARLA, AWSIM).

Strong technical foundation and expertise in at least one of the following domains: deep learning, computer vision, multi-sensor fusion, occupancy grids, reinforcement learning, or model predictive control.

Experience in robotics, driver assistance systems, and/or autonomous driving.

Experience with modern C++ (14/17/20), Python, and embedded software engineering for real-time applications.

Experience with modern software engineering tools such as CI/CD pipelines, Docker and Git.

Ability to work in a dynamic environment with complex technical challenges and requirements.

Readiness to tackle complex challenges and contribute to shaping our product into a cutting-edge autonomy platform for the next generation of mobile machines.

Personal Skills & Attributes

Strong team player with excellent collaboration skills.

Ability to communicate complex technical concepts clearly to diverse audiences.

Problem-solving mindset with a willingness to tackle challenging, ambiguous tasks.

Adaptability to work in a fast-paced, evolving environment.

Commitment to continuous learning and personal growth.

Working at Zensomy

Open culture – transparent communication, flat hierarchies, and collaborative decision-making.

Opportunity to shape cutting-edge autonomous technologies with real-world impact.

Ownership of your work – real responsibility and the freedom to shape solutions.

Access to state-of-the-art tools, labs, and test environments.

Continuous learning and professional growth through mentorship, training, and conferences.