TY - CHAP
U1 - Conference publication
A1 - Youssef, Youssef Mahmoud
A1 - Koine, Linda
A1 - Müller, Martin E.
T1 - Inducing Explainable Rules about Distributed Robotic Systems for Fault Detection & Diagnosis
T2 - 30th International Workshop on Principles of Diagnosis DX'19, November 11-13, 2019, Klagenfurt, Austria
N2 - This work presents preliminary research towards an adaptive tool for fault detection and diagnosis of distributed robotic systems using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. This is even more the case for distributed robotic systems, where the task of fault detection and diagnosis becomes exponentially more difficult. To diagnose a system, models representing the behaviour under investigation need to be developed, and since distributed robotic systems generate large amounts of data, machine learning becomes an attractive modelling approach, especially because of its high performance. However, with current methods such as artificial neural networks (ANNs), the issue of explainability arises: learnt models lack the ability to give explainable reasons behind their decisions. This paper presents current trends in methods for data collection from distributed systems, inductive logic programming (ILP) as an explainable machine learning method, and fault detection and diagnosis.
SP - 9
S1 - 9
ER -