Transferability evaluation of camera-based object detection models for autonomous driving

Lehrstuhl für Fahrzeugtechnik (Institute of Automotive Technology)
Semesterarbeit (semester thesis) / experimental

One of the strongest assumptions when training deep learning models is that training and test data are sampled from the same distribution. In real-world scenarios, however, this is rarely the case. In computer vision specifically, a model trained on only one dataset tends to perform worse when evaluated on a different dataset. The goal of this project is to quantify this phenomenon for the object detection task, taking advantage of the publicly available autonomous driving datasets.

The first step consists of a literature review of both state-of-the-art object detection architectures and autonomous driving datasets. Then, the models with the best balance of required data quantity, training and inference time, and performance metrics will be selected. These models will be trained and tested on each of the datasets in all possible combinations, and the resulting metrics will be compared. The outcome should be an assessment of both how robust the architectures are to dataset shift and which datasets yield the most generalizable models across architectures.
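The cross-dataset protocol described above can be sketched as a simple evaluation loop. This is a minimal illustration only: the dataset names, architecture list, and the `train`/`evaluate` helpers are placeholders for an actual training pipeline, not real implementations.

```python
from itertools import product

# Example public autonomous driving datasets and detector families
# (placeholders; the actual selection is part of the project).
DATASETS = ["KITTI", "nuScenes", "Waymo", "BDD100K"]
ARCHITECTURES = ["faster_rcnn", "yolo", "detr"]

def train(arch, dataset):
    """Placeholder: would train `arch` on `dataset` and return a model handle."""
    return (arch, dataset)

def evaluate(model, dataset):
    """Placeholder: would run inference on `dataset` and return a metric, e.g. mAP."""
    return 0.0  # dummy value

def cross_evaluate():
    # results[arch][(train_ds, test_ds)] -> metric, covering all combinations;
    # off-diagonal entries (train_ds != test_ds) quantify the transferability gap.
    results = {arch: {} for arch in ARCHITECTURES}
    for arch in ARCHITECTURES:
        for train_ds, test_ds in product(DATASETS, DATASETS):
            model = train(arch, train_ds)
            results[arch][(train_ds, test_ds)] = evaluate(model, test_ds)
    return results

results = cross_evaluate()
```

Comparing each off-diagonal metric against its same-dataset baseline (the diagonal) is what makes the dataset shift quantifiable per architecture.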

Work packages:

  • Literature research on camera-based object detection and autonomous driving datasets
  • Implementation, training and testing of selected object-detection architectures
  • Validation of the performance with different datasets
  • Quantitative and qualitative analysis of transferability between datasets and detection architectures


Requirements:

  • Programming experience in Python or C++
  • Experience with PyTorch or TensorFlow
  • Basic knowledge of deep learning and computer vision
  • Nice to have: experience with ROS1 or ROS2
Keywords: FTM Studienarbeit, FTM AV, FTM AV Perception, FTM Rivera, FTM Informatik
Possible start
Esteban Rivera, M.Sc.
Room: MW 3508