OpCog: an industrial development approach for cognitive agent systems in military UAV applications
(2008)
Graph databases employ graph structures such as nodes, attributes and edges to model and store relationships among data. To access this data, graph query languages (GQL) such as Cypher are typically used, which can be difficult for end users to master. In the context of relational databases, sequence-to-SQL models, which translate natural language questions to SQL queries, have been proposed. While these Neural Machine Translation (NMT) models increase the accessibility of relational databases, NMT models for graph databases are not yet available, mainly due to the lack of suitable parallel training data. In this short paper, we sketch an architecture which enables the generation of synthetic training data for the graph query language Cypher.
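The abstract only sketches the architecture, so as a rough illustration of what synthetic parallel training data for such a model could look like, here is a minimal, template-based sketch that pairs natural-language questions with Cypher queries over a small hypothetical schema. The schema, templates, and function names are illustrative assumptions, not the authors' design.

```python
import random

# Hypothetical schema: node labels, their properties, and outgoing relationships.
SCHEMA = {
    "Person": {"props": ["name", "age"], "rels": [("ACTED_IN", "Movie")]},
    "Movie": {"props": ["title", "year"], "rels": []},
}

# Each template pairs a natural-language pattern with a Cypher pattern.
TEMPLATES = [
    ("List all {label}s.",
     "MATCH (n:{label}) RETURN n"),
    ("What is the {prop} of each {label}?",
     "MATCH (n:{label}) RETURN n.{prop}"),
    ("Which {label}s are connected to a {target} via {rel}?",
     "MATCH (n:{label})-[:{rel}]->(m:{target}) RETURN n"),
]

def generate_pair():
    """Sample one synthetic (question, Cypher query) training pair."""
    label = random.choice(list(SCHEMA))
    entry = SCHEMA[label]
    template_nl, template_cq = random.choice(TEMPLATES)
    slots = {"label": label}
    if "{prop}" in template_nl:
        slots["prop"] = random.choice(entry["props"])
    if "{rel}" in template_nl:
        if not entry["rels"]:
            return generate_pair()  # retry if this label has no relationships
        slots["rel"], slots["target"] = random.choice(entry["rels"])
    return template_nl.format(**slots), template_cq.format(**slots)

if __name__ == "__main__":
    for _ in range(3):
        question, query = generate_pair()
        print(question, "->", query)
```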
Deep learning models are extensively used in various safety-critical applications. Hence, these models need to be not only accurate but also highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in an ensemble or the number of forward passes, and the drop probability, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, with a margin of 2.4% in mIoU and 1.3% in accuracy, while providing reliable uncertainty estimates for decision making.
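The DarkNet21Seg model and the exact evaluation protocol are not reproduced here; the following is only a minimal sketch of MC-Dropout, one of the three compared methods, on a toy PyTorch classifier. The network, drop probability, and number of forward passes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy classification head; the study itself uses DarkNet21Seg on point clouds.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),   # the drop probability is one of the analysed parameters
    nn.Linear(64, 4),    # 4 illustrative classes
)

def mc_dropout_predict(model, x, n_passes=20):
    """Run several stochastic forward passes with dropout kept active and
    return the mean class probabilities plus the predictive entropy."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

x = torch.randn(8, 16)  # batch of 8 illustrative feature vectors
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), uncertainty)
```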
Property-Based Testing in Simulation for Verifying Robot Action Execution in Tabletop Manipulation
(2021)
An important prerequisite for the reliability and robustness of a service robot is ensuring the robot's correct behaviour when it performs various tasks of interest. Extensive testing is one established approach for ensuring behavioural correctness; this becomes even more important with the integration of learning-based methods into robot software architectures, as there are often no theoretical guarantees about the performance of such methods in varying scenarios. In this paper, we aim to evaluate the correctness of robot behaviours in tabletop manipulation through automatic generation of simulated test scenarios in which a robot assesses its performance using property-based testing. In particular, key properties of interest for various robot actions are encoded in an action ontology and are then verified and validated within a simulated environment. We evaluate our framework with a Toyota Human Support Robot (HSR) which is tested in a Gazebo simulation. We show that our framework can correctly and consistently identify various failed actions in a variety of randomised tabletop manipulation scenarios, in addition to providing deeper insights into the type and location of failures for each designed property.
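The paper encodes its properties in an action ontology and checks them against a Gazebo simulation of the HSR; none of that is reproduced here. As a minimal illustration of the property-based testing idea, the sketch below uses the `hypothesis` library to generate randomised scenarios for a stubbed pick action and asserts one property over all of them. The action stub, workspace bounds, and property are hypothetical.

```python
from dataclasses import dataclass
from hypothesis import given, strategies as st

@dataclass
class PickResult:
    grasped: bool
    object_height: float

def simulated_pick(x: float, y: float, height: float) -> PickResult:
    """Stand-in for a pick action executed in simulation; a real setup would
    command the simulator (e.g. Gazebo) and read back the resulting world state."""
    reachable = abs(x) < 0.8 and abs(y) < 0.8
    lift = 0.1 if reachable else 0.0
    return PickResult(grasped=reachable, object_height=height + lift)

# Property: whenever the object lies inside the reachable workspace,
# a pick must succeed and lift the object above its initial height.
@given(
    x=st.floats(-0.7, 0.7),
    y=st.floats(-0.7, 0.7),
    height=st.floats(0.4, 0.9),
)
def test_pick_lifts_object(x, y, height):
    result = simulated_pick(x, y, height)
    assert result.grasped
    assert result.object_height > height

if __name__ == "__main__":
    test_pick_lifts_object()  # hypothesis runs many randomised scenarios
```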
The development of robot control programs is a complex task. Robots differ in their electrical and mechanical structure, and these differences are also reflected in the software. Dedicated robot software environments support program development, but they are mainly text-based and are usually used by experts with profound knowledge of the target robot. This paper presents a graphical programming environment which aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The developed environment allows independent robot actions to be scheduled on parallel execution lines and provides mechanisms to avoid side effects of parallel actions. The developed environment is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by applying the sequencer to a simulated service robot and a robot for educational purposes.
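The environment itself is graphical and model-driven, so the code below is not the authors' mechanism; it is only a minimal sketch of the underlying idea of parallel execution lines with a simple side-effect avoidance mechanism (a lock on a shared resource), written with Python's asyncio. All action names are illustrative.

```python
import asyncio

# A shared resource guarded by a lock, so that parallel execution lines
# cannot interfere with each other while using it.
base_lock = asyncio.Lock()

async def move_base(goal: str):
    async with base_lock:  # only one execution line may drive the base at a time
        print(f"moving base to {goal}")
        await asyncio.sleep(1.0)

async def move_arm(pose: str):
    print(f"moving arm to {pose}")
    await asyncio.sleep(0.5)

async def speak(text: str):
    print(f"saying: {text}")
    await asyncio.sleep(0.2)

async def run_line(actions):
    """Execute one sequence of actions in order (one execution line)."""
    for action in actions:
        await action

async def main():
    # Two parallel execution lines, each an independent action sequence.
    line_a = [move_arm("carry"), speak("on my way")]
    line_b = [move_base("kitchen")]
    await asyncio.gather(run_line(line_a), run_line(line_b))

asyncio.run(main())
```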
The BRICS component model: a model-based development paradigm for complex robotics software systems
(2013)
Exploring Gridmap-based Interfaces for the Remote Control of UAVs under Bandwidth Limitations
(2017)
RPSL meets lightning: A model-based approach to design space exploration of robot perception systems
(2017)
We present a universal modular robot architecture. A robot consists of the following intelligent modules: a central control unit (CCU), a drive, actuators, a vision unit and a sensor input unit. The software and hardware of the robot fit into this structure. We define generic interface protocols between these units. If the robot has to solve a new application and is equipped with a different drive, new actuators and different sensors, only the program for the new application has to be loaded into the CCU. The interfaces to the drive, the vision unit and the other sensors are plug-and-play interfaces. The only constraint for the CCU program is the set of commands for the actuators.
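The paper defines generic interface protocols between the CCU and the other units; those protocols are not reproduced here. The sketch below only illustrates the plug-and-play idea with Python abstract base classes, where concrete drives, actuators and sensor units can be exchanged behind fixed interfaces while only the CCU application program changes. All method names are assumptions.

```python
from abc import ABC, abstractmethod

class Drive(ABC):
    """Generic drive interface; any concrete drive plugs in behind it."""
    @abstractmethod
    def set_velocity(self, linear: float, angular: float) -> None: ...

class Actuator(ABC):
    @abstractmethod
    def execute(self, command: str) -> None: ...

class VisionUnit(ABC):
    @abstractmethod
    def detect_objects(self) -> list[str]: ...

class SensorUnit(ABC):
    @abstractmethod
    def read(self) -> dict: ...

class CCU:
    """Central control unit: only the application program changes for a new
    application; the units behind the interfaces can be swapped freely."""
    def __init__(self, drive: Drive, actuators: list[Actuator],
                 vision: VisionUnit, sensors: list[SensorUnit]):
        self.drive = drive
        self.actuators = actuators
        self.vision = vision
        self.sensors = sensors

    def run_application(self) -> None:
        # Illustrative application program loaded into the CCU.
        if "cup" in self.vision.detect_objects():
            self.drive.set_velocity(0.2, 0.0)
            for actuator in self.actuators:
                actuator.execute("grasp")
```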
Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment
(2015)
As robots become ubiquitous and more capable, there is a pressing need for solid robot software development methods in order to broaden the spectrum of tasks robots can perform. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, covering the specification, deployment and adaptation of robot perception systems.
This dataset contains multimodal data recorded on robots performing robot-to-human and human-to-robot handovers. The intended application of the dataset is to develop and benchmark methods for failure detection during handovers. Thus, the trials in the dataset include both successful and failed handover actions. For a more detailed description of the dataset, please see the included Datasheet.