RPSL meets lightning: A model-based approach to design space exploration of robot perception systems
(2017)
Exploring Gridmap-based Interfaces for the Remote Control of UAVs under Bandwidth Limitations
(2017)
As robots become ubiquitous and more capable, introducing solid robot software development methods becomes pressing in order to broaden the spectrum of tasks robots can take on. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, covering the specification, deployment, and adaptation of robot perception systems.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the capabilities developed for operating in industrial environments, including reliable and precise navigation, flexible manipulation, and robust object recognition.
The BRICS component model: a model-based development paradigm for complex robotics software systems
(2013)
Deep learning models are extensively used in safety-critical applications, so they need to be not only accurate but also highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used in robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout, and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model, and comprehensively analyze the impact of parameters such as the number of models in an ensemble, the number of forward passes, and the drop probability on task performance and the quality of the uncertainty estimates. We find that Deep Ensembles outperform the other methods on both task performance and uncertainty metrics, by a margin of 2.4% in mIoU and 1.3% in accuracy, while providing reliable uncertainty estimates for decision making.
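To illustrate the kind of sampling-based uncertainty estimation compared in this abstract, the sketch below shows a minimal MC-Dropout setup in PyTorch: dropout is kept active at inference time, several stochastic forward passes are run, and the predictive entropy of the averaged class probabilities serves as a per-point uncertainty score. The tiny per-point classifier and the function names here are hypothetical stand-ins, not the DarkNet21Seg model or the exact evaluation protocol used in the paper.

```python
# Minimal MC-Dropout sketch for per-point uncertainty on a point cloud.
# The toy classifier below is an assumed stand-in, not DarkNet21Seg.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointClassifier(nn.Module):
    """Toy per-point classifier with dropout layers (assumed architecture)."""

    def __init__(self, in_dim=3, num_classes=20, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, points, num_passes=20):
    """Run several stochastic forward passes with dropout kept active and
    return mean class probabilities plus predictive entropy per point."""
    model.train()  # keep dropout layers sampling at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(points), dim=-1) for _ in range(num_passes)]
        )                                   # (num_passes, N, num_classes)
    mean_probs = probs.mean(dim=0)          # (N, num_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = PointClassifier()
    cloud = torch.randn(1024, 3)            # dummy point cloud: N points, xyz
    mean_probs, entropy = mc_dropout_predict(model, cloud, num_passes=20)
    print(mean_probs.argmax(dim=-1)[:5], entropy[:5])
```

A Deep Ensemble variant of the same idea would replace the repeated stochastic passes with one deterministic pass per independently trained model and average those predictions instead; MC-DropConnect differs only in that weights, rather than activations, are randomly dropped.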