
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

  • Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns: properties that preclude their deployment, as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: the former might profit from the broad range of machine learning topics covered and from the discussions of the limitations of recent methods; the latter might gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and on strategies for advancing existing approaches accordingly.

Document Type: Part of a Book
Author: Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Parent Title (English): Fingscheidt, Gottschalk et al. (Eds.): Deep Neural Networks and Data for Automated Driving. Robustness, Uncertainty Quantification, and Insights Towards Safety
Number of pages: 76
First Page: 3
Last Page: 78
Publisher: Springer International Publishing AG
Place of publication: Cham
Publishing Institution: Hochschule Bonn-Rhein-Sieg
Date of first publication: 2022/06/18
Copyright: © The Author(s) 2022. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License.
Funding: The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project “Methoden und Maßnahmen zur Absicherung von KI-basierten Wahrnehmungsfunktionen für das automatisierte Fahren (KI Absicherung)”. The authors would like to thank the consortium for the successful cooperation.
Departments, institutes and facilities: Fachbereich Informatik
Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE)
Dewey Decimal Classification (DDC): 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 005 Computer programming, programs & data
Entry in this database: 2023/01/02
Licence: Creative Commons - CC BY - Attribution 4.0 International