Swedish-wheeled mobile robots have remarkable mobility properties that allow them to rotate and translate at the same time. Being holonomic systems, their kinematic model makes it possible to design separate and independent position and heading trajectory tracking control laws. Nevertheless, if these control laws are implemented in the presence of unaccounted-for actuator saturation, the resulting saturated linear and angular velocity commands can interfere with each other and dramatically degrade the expected performance. Based on Lyapunov's direct method, a position and heading trajectory tracking control law for Swedish-wheeled robots is developed. It explicitly accounts for actuator saturation by using ideas from a prioritized task-based control framework.
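To make the interference problem concrete, the following minimal Python sketch shows one way a prioritized saturation scheme can be applied to decoupled position and heading commands. The proportional law, gains, and limits are illustrative assumptions, not the control law developed in the paper.

```python
import numpy as np

# Illustrative sketch of prioritized command saturation for a holonomic
# (Swedish-wheeled) robot. Gains and limits are assumed for demonstration.

V_MAX = 1.0   # linear speed limit in m/s (assumed)
W_MAX = 2.0   # angular speed limit in rad/s (assumed)

def angle_diff(a, b):
    """Smallest signed difference a - b, wrapped to [-pi, pi]."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def tracking_commands(p, p_ref, v_ref, theta, theta_ref, w_ref,
                      k_p=1.5, k_th=2.0):
    """Position and heading commands, with position given priority."""
    v_cmd = np.asarray(v_ref) + k_p * (np.asarray(p_ref) - np.asarray(p))
    w_cmd = w_ref + k_th * angle_diff(theta_ref, theta)

    # Prioritization: scale the linear command into its budget without
    # changing its direction, then clamp the lower-priority heading command
    # instead of letting both saturate independently and interfere.
    speed = np.linalg.norm(v_cmd)
    if speed > V_MAX:
        v_cmd = v_cmd * (V_MAX / speed)
    w_cmd = float(np.clip(w_cmd, -W_MAX, W_MAX))
    return v_cmd, w_cmd
```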
With regard to performance, well-established SW-only design methodologies proceed by first making the initial specification run, then enhancing its functionality, and finally optimizing it. When designing embedded systems (EbS), this approach is not viable, since decisive design decisions, e.g. estimating the required processing power or identifying those parts of the specification that need to be delegated to dedicated HW, depend on the speed and faithfulness of the initial specification. We propose a sequence of optimization steps embedded into the design flow, which enables a structured way to accelerate a given working EbS specification at different layers. This sequence of accelerations comprises algorithm selection, algorithm transformation, data transformation, implementation optimization, and finally HW acceleration. We analyze how all acceleration steps are influenced by the specific attributes of the underlying EbS. The overall acceleration procedure is explained and quantified using a real-life industrial example.
The development of robot control programs is a complex task. Many robots differ in their electrical and mechanical structure, which is also reflected in the software. Specific robot software environments support program development, but they are mainly text-based and usually applied by experts in the field with profound knowledge of the target robot. This paper presents a graphical programming environment that aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The developed environment allows independent robot actions to be scheduled on parallel execution lines and provides mechanisms to avoid side effects of parallel actions. It is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by applying the sequencer to a simulated service robot and a robot for educational purposes.
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
Unexpected Situations in Service Robot Environment: Classification and Reasoning Using Naive Physics
(2014)
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context, we focus in particular on specific faults that arise from the interaction of a robot with its real-world environment. Even a well-modelled robot may fail to perform its tasks successfully due to unexpected situations that occur while interacting. These situations manifest as deviations of properties of the objects (manipulated by the robot) from their expected values. Hence, they are experienced by the robot as external faults.
CASTLE is a co-design platform developed at the GMD SET institute. It provides a number of design tools for configuring application-specific design flows. This paper presents a walk through the CASTLE co-design environment, following the design flow of a video processing system. The design methodology and tool usage for this real-life example are described from a designer's point of view. The design flow starts with a C/C++ program and gradually derives a register-transfer-level description of the processor hardware, as well as the corresponding compiler for generating the processor opcode. The main results of each design step are presented, and the usage of the CASTLE tools at each step is explained.
Co-design is concerned with the joint design of the hardware and software making up an embedded computer system [Wol94]. A top-down design flow for an embedded system begins with a system specification. If it is executable, it may be used for simulation, system verification, or to identify algorithmic bottlenecks. In contrast to other chapters of this book, the specification is not developed in this case study; rather, it is given from the beginning. Furthermore, we are not concerned with partitioning or with the synthesis of dedicated HW. Instead, we focus on the problem of finding an off-the-shelf microcontroller which implements the desired functionality and meets all specification constraints. If feasible, this is usually much cheaper than using dedicated hardware. This chapter answers the question of feasibility for a real-life problem from the automobile industry.
A way of combining a relatively new sensor technology, namely optical analog VLSI devices, with a standard digital omni-directional vision system is investigated. The sensor used is a neuromorphic analog VLSI sensor that estimates the global visual image motion. The sensor provides two analog output voltages that represent the components of the global optical flow vector. The readout is guided by an omni-directional mirror that maps the location of the ball and directs the robot to align its position so that a sensor-actuator module that includes the analog VLSI optical flow sensor can be activated. The purpose of the sensor-actuator module is to operate with a higher update rate than the standard vision system and thus increase the reactivity of the robot in very specific situations. This paper demonstrates an application example in which the robot is a goalkeeper with the task of defending the goal during a penalty kick.
A common methodology for designing embedded systems, applied especially in small and medium-sized enterprises, proceeds as follows: take the already existing microcontroller development kit and functions already available from an old system realization, vary or adapt them to the new task, and then test by emulation whether the specification is met.
Robots which are able to carry out their tasks robustly in real-world environments are not only desirable but necessary if we want them to be welcomed by a wider audience. Yet they may often fail to execute their actions successfully because of insufficient information about the behaviour of the objects used in those actions.
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst for many tasks knowing the approximate positions of people may be sufficient, the ability to identify unique people is needed to accurately count people in the real world. To accomplish the people counting task, a robust system for people detection, tracking and identification is needed.
Improving Robustness of Task Execution Against External Faults Using Simulation Based Approach
(2013)
Robots interacting in complex and cluttered environments may face unexpected situations, referred to as external faults, which prevent the successful completion of their tasks. In order to function in a more robust manner, robots need to recognise these faults and learn how to deal with them in the future. We present a simulation-based technique to avoid external faults occurring during the execution of a robot's releasing actions. Our technique uses simulation to generate a set of labeled examples, which are used by a histogram algorithm to compute a safe region. A safe region consists of a set of releasing states of an object that correspond to successful performances of the action. The technique also suggests a general solution for avoiding external faults not only for the current, observable object but also for any other object of the same shape but different size.
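As a rough illustration of the histogram step, the sketch below bins simulated releasing states and keeps the bins whose empirical success rate is high. The two-dimensional state, bin count, and threshold are assumptions for demonstration rather than the paper's setup.

```python
import numpy as np

# Bin labeled releasing states (reduced here to two dimensions, e.g.
# release height and lateral offset) and keep bins whose empirical
# success rate exceeds a threshold; those bins form the "safe region".

def safe_region(states, labels, bins=20, min_rate=0.9):
    """states: (N, 2) array of release states; labels: (N,) 1 = success."""
    succ, xe, ye = np.histogram2d(states[:, 0], states[:, 1],
                                  bins=bins, weights=labels)
    total, _, _ = np.histogram2d(states[:, 0], states[:, 1], bins=[xe, ye])
    rate = np.divide(succ, total, out=np.zeros_like(succ), where=total > 0)
    return rate >= min_rate, (xe, ye)   # boolean mask over bins + edges

# Usage with synthetic data: releases close to the surface succeed.
rng = np.random.default_rng(0)
s = rng.uniform(0, 1, size=(5000, 2))
y = (s[:, 0] < 0.3).astype(float)       # toy success criterion
mask, edges = safe_region(s, y)
```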
This project investigated the viability of using the Microsoft Kinect to obtain reliable Red-Green-Blue-Depth (RGBD) information. We explored the usability of the Kinect in a variety of environments as well as its ability to detect different classes of materials and objects. This was facilitated by implementing Random Sample Consensus (RANSAC)-based algorithms and highly parallelized workflows to provide time-sensitive results. We found that the Kinect provides detailed and reliable information in a time-sensitive manner. Furthermore, the project results recommend usability and operational parameters for the use of the Kinect as a scientific research tool.
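A minimal example of the RANSAC idea as used here, sketched for plane segmentation in a point cloud; the iteration count and inlier tolerance are illustrative.

```python
import numpy as np

# Compact RANSAC plane fit over an (N, 3) point cloud, of the kind used
# to segment planar surfaces (floor, table) in RGBD data.

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    rng = rng or np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```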
In the design of robot skills, the focus generally lies on increasing the flexibility and reliability of the robot execution process; however, typical skill representations are not designed for analysing execution failures if they occur or for explicitly learning from failures. In this paper, we describe a learning-based hybrid representation for skill parameterisation called an execution model, which considers execution failures to be a natural part of the execution process. We then (i) demonstrate how execution contexts can be included in execution models, (ii) introduce a technique for generalising models between object categories by combining generalisation attempts performed by a robot with knowledge about object similarities represented in an ontology, and (iii) describe a procedure that uses an execution model for identifying a likely hypothesis of a parameterisation failure. The feasibility of the proposed methods is evaluated in multiple experiments performed with a physical robot in the context of handle grasping, object grasping, and object pulling. The experimental results suggest that execution models contribute towards avoiding execution failures, but also represent a first step towards more introspective robots that are able to analyse some of their execution failures in an explicit manner.
The increasing complexity of the tasks that robots are required to execute demands higher reliability of robotic platforms. For this, it is crucial for robot developers to consider fault diagnosis. In this study, a general non-intrusive fault diagnosis system for robotic platforms is proposed. A mini-PC is non-intrusively attached to a robot and used to detect and diagnose faults. The health data and diagnoses produced by the mini-PC are then standardized and transmitted to a remote PC. A storage device is also attached to the mini-PC to log the health data in case communication with the remote PC is lost. In this study, a hybrid fault diagnosis method is compared to consistency-based diagnosis (CBD), and CBD is selected for deployment on the system. The proposed system is modular and can be deployed on different robotic platforms with minimal setup.
Data-Driven Robot Fault Detection and Diagnosis Using Generative Models: A Modified SFDD Algorithm
(2019)
This paper presents a modification of the data-driven sensor-based fault detection and diagnosis (SFDD) algorithm for online robot monitoring. Our version of the algorithm uses a collection of generative models, in particular restricted Boltzmann machines, each of which represents the distribution of sliding-window correlations between a pair of correlated measurements. We use these models in a residual generation scheme, where high residuals generate conflict sets that are then used in a subsequent diagnosis step. As a proof of concept, the framework is evaluated on a mobile logistics robot for the problem of recognising disconnected wheels. The evaluation demonstrates the feasibility of the framework (on the faulty data set, the models obtained 88.6% precision and 75.6% recall), but also shows that the monitoring results are influenced by the choice of distribution model and the model parameters as a whole.
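The residual-generation step can be illustrated as follows. Note that the paper models the correlation distribution with restricted Boltzmann machines; this sketch substitutes a simple Gaussian baseline purely to show the sliding-window scheme.

```python
import numpy as np

# Sliding-window correlations between a pair of measurements, flagged
# against nominal statistics. The paper fits restricted Boltzmann machines
# to the correlation distribution; the Gaussian mean/std baseline here is
# a stand-in used only to illustrate the residual-generation idea.

def window_correlations(x, y, win=50):
    n = len(x) - win + 1
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(n)])

def residuals(corr, mu, sigma, k=3.0):
    """Flag windows whose correlation deviates k sigmas from nominal."""
    return np.abs(corr - mu) > k * sigma

# Nominal phase: estimate mu/sigma from fault-free data. Monitoring phase:
# a flagged window contributes its measurement pair to a conflict set,
# which the subsequent diagnosis step then resolves.
```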
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on specific faults which arise from the interaction of a robot with its real-world environment due to insufficient knowledge for action execution.
In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model.
In this work, we investigate such a symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation and the most suitable learning paradigm are identified.
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e., from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
AVAILABILITY
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
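A hypothetical quick start, assuming the PyPI project linked above installs as `stonkgs` and that the pre-trained checkpoint loads through the standard Hugging Face interface; neither is verified here, and the linked repository documents the actual API.

```python
# Hypothetical usage sketch; consult https://github.com/stonkgs/stonkgs
# for the actual interface. Assumes `pip install stonkgs` and a standard
# Hugging Face loading convention for the checkpoint named in the text.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("stonkgs/stonkgs-150k")
model = AutoModel.from_pretrained("stonkgs/stonkgs-150k")
```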
Final Report on the BMBF-Funded Project Enabling Infrastructure for HPC-Applications (EI-HPC)
(2020)
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading
(2020)
Automatic Short Answer Grading (ASAG) is the process of grading student answers by computational approaches, given a question and the desired answer. Previous works implemented methods of concept mapping and facet mapping, and some used conventional word embeddings to extract semantic features; they manually extracted multiple features to train on the corresponding datasets. We use pretrained embeddings from the transfer learning models ELMo, BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a single feature, cosine similarity, extracted from the embeddings of these models. We compare the RMSE scores and correlation measurements of the four models with previous works on the Mohler dataset. Our work demonstrates that ELMo outperformed the other three models. We also briefly describe the four transfer learning models and conclude with possible causes of the poor results of transfer learning models.
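The single-feature setup is simple enough to sketch: embed both answers with any of the four pretrained encoders and feed their cosine similarity to a regressor. The `embed` function below is a placeholder, not an actual model interface.

```python
import numpy as np

# Sketch of the single-feature setup. `embed` stands in for any of the
# pretrained encoders (ELMo, BERT, GPT, GPT-2) and is assumed to return a
# fixed-size vector for a sentence; it is a placeholder, not a real API.

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def grade_feature(embed, reference_answer, student_answer):
    """The one regression feature used to predict the score."""
    return cosine_similarity(embed(reference_answer), embed(student_answer))

# A regressor fit on (feature, human grade) pairs maps the similarity to a
# score; RMSE and correlation against human grades evaluate each encoder.
```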
When a robotic agent experiences a failure while acting in the world, it should be possible to discover why that failure has occurred, namely to diagnose the failure. In this paper, we argue that the diagnosability of robot actions, at least in a classical sense, is a feature that cannot be taken for granted since it strongly depends on the underlying action representation. We specifically define criteria that determine the diagnosability of robot actions. The diagnosability question is then analysed in the context of a handle manipulation action, such that we discuss two different representations of the action – a composite policy with a learned success model for the action parameters, and a neural network-based monolithic policy – both of which exist on different sides of the diagnosability spectrum. Through this comparison, we conclude that composite actions are more suited to explicit diagnosis, but representations with less prior knowledge are more flexible. This suggests that model learning may provide balance between flexibility and diagnosability; however, data-driven diagnosis methods also need to be enhanced in order to deal with the complexity of modern robots.
Efficient and comprehensive assessment of students' knowledge is an imperative task in any learning process. Short answer grading is one of the most successful methods of assessing students' knowledge. Many supervised learning and deep learning approaches have been used to automate short answer grading in the past. We investigate why assistive grading with active learning is the next logical step for this task, as there is no absolute ground-truth answer for any question and the task is very subjective in nature. We present a fast and easy method to harness the power of active learning and natural language processing to assist the task of grading short answer questions. A web-based GUI is designed and implemented to provide an interactive short answer grading system. The experiments show that active learning saves graders' time and effort in assessment and reaches the performance of supervised learning with fewer graded answers for training.
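A minimal uncertainty-sampling round of the kind such assistive grading builds on might look as follows; the classifier choice and batch size are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One round of least-confident uncertainty sampling: the grader labels
# only the answers the current model is least sure about.

def query_indices(X_labeled, y_labeled, X_pool, batch=10):
    """Pick the pool answers the current model is least confident about."""
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)      # least-confident sampling
    return np.argsort(uncertainty)[-batch:]    # hand these to the grader

# The queried answers are graded by a human, moved from the pool into the
# labeled set, and the round repeats until performance plateaus.
```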
When an autonomous robot learns how to execute actions, it is of interest to know if and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects and in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows using ontological knowledge as prior information that is then refined by the robot's own experiences. We verify our algorithm for two actions – grasping and stowing everyday objects – and show that the robot can deduce cases in which an existing policy can generalise to other objects and cases in which additional execution knowledge has to be acquired.
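The transfer strategy can be sketched with a toy ontology: a known execution model is proposed for related classes unless contrary evidence blocks the transfer. The ontology, the evidence check, and all names below are illustrative, not the paper's implementation.

```python
# Ontology-guided model transfer: propose the stored execution model for
# classes related to a known class, unless contrary evidence (e.g. a
# recorded failure) blocks the transfer. Toy data, for illustration only.

ONTOLOGY = {                       # child -> parent
    "cup": "container", "bowl": "container",
    "container": "object", "book": "object",
}

def related_classes(cls, ontology):
    """Classes sharing a direct parent with `cls` in the toy ontology."""
    parent = ontology.get(cls)
    return {c for c, p in ontology.items() if p == parent and c != cls}

def transfer_models(models, known_cls, failures, ontology=ONTOLOGY):
    for target in related_classes(known_cls, ontology):
        if target not in models and target not in failures:
            models[target] = models[known_cls]   # reuse as prior knowledge
    return models

# e.g. transfer_models({"cup": "grasp_model"}, "cup", failures=set())
# proposes the cup model for "bowl" as well.
```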
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution
(2021)
For robots acting in human-centered environments, the ability to improve based on experience is essential for reliable and adaptive operation; however, particularly in the context of robot failure analysis, experience-based improvement is only useful if robots are also able to reason about and explain the decisions they make during execution. In this paper, we describe and analyse a representation of execution-specific knowledge that combines (i) a relational model in the form of qualitative attributes that describe the conditions under which actions can be executed successfully and (ii) a continuous model in the form of a Gaussian process that can be used for generating parameters for action execution, but also for evaluating the expected execution success given a particular action parameterisation. The proposed representation is based on prior, modelled knowledge about actions and is combined with a learning process that is supervised by a teacher. We analyse the benefits of this representation in the context of two actions – grasping handles and pulling an object on a table – such that the experiments demonstrate that the joint relational-continuous model allows a robot to improve its execution based on experience, while reducing the severity of failures experienced during execution.
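The continuous part of such a representation can be sketched with an off-the-shelf Gaussian process: trained on executed trials, it scores candidate action parameterisations by expected success. The data, dimensionality, and the scikit-learn default kernel are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Gaussian process mapping action parameters (e.g. a 2D grasp pose offset)
# to expected execution success, trained from past trials. Toy data only.

rng = np.random.default_rng(1)
X = rng.uniform(-0.1, 0.1, size=(30, 2))              # executed parameters
y = (np.linalg.norm(X, axis=1) < 0.05).astype(float)  # observed outcomes

gp = GaussianProcessRegressor().fit(X, y)

# Evaluate candidate parameterisations and pick the most promising one;
# the predictive variance can additionally flag unreliable candidates.
candidates = rng.uniform(-0.1, 0.1, size=(100, 2))
mean, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(mean)]
```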
Tell Your Robot What To Do: Evaluation of Natural Language Models for Robot Command Processing
(2019)
The use of natural language to specify robot tasks is a convenient way to command robots. As a result, several models and approaches capable of understanding robot commands have been developed, which, however, complicates the choice of a suitable model for a given scenario. In this work, we present a comparative analysis and benchmarking of four natural language understanding models - Mbot, Rasa, LU4R, and ECG. We particularly evaluate how well the models understand domestic service robot commands by recognizing the actions and any complementary information in them, in three use cases: the RoboCup@Home General Purpose Service Robot (GPSR) category 1 contest, GPSR category 2, and hospital logistics in the context of the ROPOD project.
When developing robot functionalities, finite state machines are commonly used due to their straightforward semantics and simple implementation. State machines are also a natural implementation choice when designing robot experiments, as they generally lead to reproducible program execution. In practice, however, implementing state machines can lead to significant code repetition and may force developers to touch the code whenever reparameterisation is required. In this paper, we present a small Python library that allows state machines to be specified, configured, and dynamically created using a minimal domain-specific language. We illustrate the use of the library in three different use cases - scenario definition in the context of the RoboCup@Home competition, experiment design in the context of the ROPOD project, as well as specification transfer between robots.
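To give a flavour of the approach, the sketch below declares states and transitions as plain data so that scenarios can be reconfigured or generated dynamically without rewriting control flow. This is an illustrative miniature, not the library's actual API.

```python
# Minimal state-machine "DSL": the machine is a data structure, so a
# scenario can be edited, reparameterised, or generated programmatically.
# Illustrative only; the library in the paper defines its own interface.

SPEC = {
    "start": "approach",
    "states": {
        "approach": {"ok": "grasp", "fail": "approach"},
        "grasp":    {"ok": "done",  "fail": "approach"},
    },
}

def run(spec, step):
    """step(state) -> outcome string; runs until a terminal state."""
    state = spec["start"]
    while state in spec["states"]:
        state = spec["states"][state][step(state)]
    return state

# e.g. run(SPEC, lambda s: "ok") returns "done".
```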
For robots acting - and failing - in everyday environments, a predictable behaviour representation is important so that it can be utilised for failure analysis, recovery, and subsequent improvement. Learning from demonstration combined with dynamic motion primitives is one commonly used technique for creating models that are easy to analyse and interpret; however, mobile manipulators complicate such models since they need the ability to synchronise arm and base motions for performing purposeful tasks. In this paper, we analyse dynamic motion primitives in the context of a mobile manipulator - a Toyota Human Support Robot (HSR) - and introduce a small extension of dynamic motion primitives that makes it possible to perform whole body motion with a mobile manipulator. We then present an extensive set of experiments in which our robot was grasping various everyday objects in a domestic environment, where a sequence of object detection, pose estimation, and manipulation was required for successfully completing the task. Our experiments demonstrate the feasibility of the proposed whole body motion framework for everyday object manipulation, but also illustrate the necessity for highly adaptive manipulation strategies that make better use of a robot's perceptual capabilities.
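For reference, a standard one-dimensional discrete dynamic motion primitive is sketched below. The paper's whole-body extension couples such systems across arm and base, which is not reproduced here, and the gains are common textbook values rather than the paper's.

```python
import numpy as np

# Standard one-dimensional discrete DMP: a critically damped spring-damper
# toward the goal g, shaped by a learned forcing term that fades out as
# the canonical phase variable x decays. Gains are typical defaults.

def dmp_rollout(y0, g, weights, centers, widths, tau=1.0, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=3.0):
    y, z, x = y0, 0.0, 1.0
    traj = [y]
    while x > 1e-3:
        psi = np.exp(-widths * (x - centers) ** 2)          # basis activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)  # transformation sys
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                      # canonical system
        traj.append(y)
    return np.array(traj)

# With zero weights the rollout converges smoothly from y0 to the goal g:
path = dmp_rollout(0.0, 1.0, np.zeros(10),
                   np.linspace(0, 1, 10), np.full(10, 50.0))
```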