Multimodal Recurrent Depth Estimation for Autonomous Vehicles

Institute
Lehrstuhl für Fahrzeugtechnik
Type
Bachelor's thesis / Semester thesis / Master's thesis
Content
experimental / theoretical
Description

EDGAR, the new research vehicle for autonomous driving at TUM, is currently under construction. In parallel, an overall software stack is being developed that will enable fully autonomous driving in urban environments.

In previous theses, algorithms were developed to estimate the depth of 2D camera images using different sensor modalities (camera, LiDAR, radar). These approaches mainly consider single frames. In this work, a deep-learning-based method is to be conceptualized, implemented, and validated that additionally exploits the temporal dependency between consecutive frames. Both supervised and self-supervised approaches are to be considered.
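
One common way to realize such time-dependent depth estimation is to carry a recurrent hidden state (for example a convolutional GRU) across consecutive frames, so the network can fuse the current image features with information from previous time steps. The following PyTorch sketch is purely illustrative and not part of the existing EDGAR software; all module names, channel sizes, and the clip-shaped input are assumptions.

```python
# Illustrative sketch only: a ConvGRU cell that carries a hidden state across frames
# so a depth decoder can exploit temporal context. Not taken from the EDGAR codebase.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell operating on feature maps of shape (B, C, H, W)."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Update and reset gates are computed jointly from input features and hidden state.
        self.gates = nn.Conv2d(in_channels + hidden_channels, 2 * hidden_channels,
                               kernel_size, padding=padding)
        self.candidate = nn.Conv2d(in_channels + hidden_channels, hidden_channels,
                                   kernel_size, padding=padding)

    def forward(self, x, h):
        if h is None:
            h = torch.zeros(x.size(0), self.candidate.out_channels,
                            x.size(2), x.size(3), device=x.device)
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class RecurrentDepthHead(nn.Module):
    """Toy recurrent depth head: image features -> ConvGRU -> per-pixel inverse depth."""

    def __init__(self, feat_channels: int = 64, hidden_channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat_channels, 3, padding=1), nn.ReLU())
        self.gru = ConvGRUCell(feat_channels, hidden_channels)
        self.decoder = nn.Conv2d(hidden_channels, 1, 3, padding=1)

    def forward(self, frames):
        # frames: (B, T, 3, H, W) video clip; returns a list of (B, 1, H, W) inverse-depth maps.
        h, depths = None, []
        for t in range(frames.size(1)):
            feat = self.encoder(frames[:, t])
            h = self.gru(feat, h)
            depths.append(torch.sigmoid(self.decoder(h)))
        return depths
```

In a supervised setting, the predicted maps would be trained against LiDAR or radar ground truth; in a self-supervised setting, a photometric reprojection loss between neighbouring frames is a common alternative.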

For validation, both real-world data and the department's own hardware-in-the-loop (HiL) simulator are to be used.

The following work packages are part of the thesis:

  • Literature review of existing concepts in research and industry
  • Evaluation and classification of open-source algorithms
  • Conceptual design of a deep learning-based algorithm for time-dependent depth estimation
  • Adaptation to the sensor concept of the research vehicle EDGAR
  • Integration of the developed software into the overall software (see the ROS2 sketch after this list)
  • Documentation and visualization of the results
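
For the integration work package, the depth estimation will eventually have to run as a node inside the ROS2-based overall software. The rclpy skeleton below is a minimal, hypothetical sketch; the topic names, QoS settings, and the pass-through inference placeholder are assumptions and not the actual interfaces of the EDGAR stack.

```python
# Hypothetical ROS2 node skeleton: subscribes to camera images and publishes a depth image.
# Topic names and the dummy inference step are assumptions for illustration only.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class DepthEstimationNode(Node):
    def __init__(self):
        super().__init__('depth_estimation_node')
        # Assumed input/output topics; replace with the vehicle's actual camera/depth topics.
        self.subscription = self.create_subscription(
            Image, '/camera/front/image_raw', self.on_image, 10)
        self.publisher = self.create_publisher(Image, '/perception/depth/image', 10)

    def on_image(self, msg: Image) -> None:
        # Placeholder: here the recurrent depth network would be run on the incoming frame
        # (updating its hidden state). Only the header and image geometry are forwarded
        # to illustrate the publish path.
        depth_msg = Image()
        depth_msg.header = msg.header
        depth_msg.height = msg.height
        depth_msg.width = msg.width
        depth_msg.encoding = '32FC1'  # single-channel float depth, a common convention
        depth_msg.step = msg.width * 4
        depth_msg.data = bytes(msg.width * msg.height * 4)  # dummy all-zero depth
        self.publisher.publish(depth_msg)


def main(args=None):
    rclpy.init(args=args)
    node = DepthEstimationNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```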


What we offer:

  • A highly motivated team of research associates and students pursuing the common goal of full-stack autonomous driving
  • Work with state-of-the-art hardware: HiL simulator, research vehicle EDGAR, cloud compute power
  • Work with state-of-the-art software tooling: ROS2, Docker, CI/CD
  • Build industry-relevant knowledge and software engineering skills


Prerequisites
  • Highly motivated team player
  • Motivation to familiarize yourself with new topics
  • Ideally programming experience (C++, Python, Git, ROS2)
  • Ideally prior knowledge in the field of autonomous driving or sensor technology
Technologies used
Git, C++, Python, ROS2, Docker
Tags
FTM Studienarbeit, FTM AV, FTM AV Perception, FTM Sauerbeck, FTM Informatik
Possible start
Immediately
Contact
Florian Sauerbeck, M.Sc.
Room: MW 3508
Tel.: +49 89 289 15342
sauerbeck@ftm.mw.tum.de