H-BRS Bibliography
The dataset contains the following data from successful and failed executions of the Toyota HSR robot placing a book on a shelf:
- RGB images from the robot's head camera
- Depth images from the robot's head camera
- Rendered images of the robot's 3D model from the point of view of the robot's head camera
- Force-torque readings from a wrist-mounted force-torque sensor
- Joint efforts, velocities and positions
- Extrinsic and intrinsic camera calibration parameters
- Frame-level anomaly annotations

The anomalies that occur during execution include:
- the manipulated book falling down
- books on the shelf being disturbed significantly
- camera occlusions
- the robot being disturbed by an external collision

The dataset is split into train, validation and test sets with the following numbers of trials:
- Train: 48 successful trials
- Validation: 6 successful trials
- Test: 60 anomalous trials and 7 successful trials
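As a toy example of how the listed modalities and the frame-level annotations could be used together, the sketch below is illustrative only: the array shapes, the z-score baseline and the threshold are assumptions, not part of the dataset or any published method. It scores each frame by how far the wrench magnitude deviates from the statistics of the successful training trials.

```python
# Toy baseline: flag frames whose force-torque magnitude deviates from the
# statistics of successful (training) executions. Shapes and the z-score
# threshold are illustrative assumptions about how the data might be arranged.
import numpy as np

def wrench_magnitude(wrench):
    """wrench: (T, 6) array of force-torque readings -> (T,) magnitudes."""
    return np.linalg.norm(wrench, axis=1)

def fit_nominal(train_trials):
    """Estimate mean/std of the wrench magnitude over successful trials."""
    mags = np.concatenate([wrench_magnitude(w) for w in train_trials])
    return mags.mean(), mags.std()

def frame_anomaly_scores(wrench, mean, std):
    """Absolute z-score of each frame's wrench magnitude."""
    return np.abs(wrench_magnitude(wrench) - mean) / std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.normal(0.0, 1.0, size=(300, 6)) for _ in range(48)]
    mean, std = fit_nominal(train)

    test = rng.normal(0.0, 1.0, size=(300, 6))
    test[150:170] += 4.0                     # simulated impact, e.g. a falling book
    scores = frame_anomaly_scores(test, mean, std)
    print("frames above threshold:", np.where(scores > 3.0)[0])
```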
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among sensor signals can be modeled so that sensor faults are detected when these dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method against the Pearson correlation on simulated signals and show that it handles noise and lags in the signals gracefully and detects the dependencies well. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during the robot's operation. The results show that the method is able to detect changes in the system when faults are injected, but it is also prone to false positives. This suggests that the method can serve as a weak fault detector, while other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
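As a rough illustration of the pairwise dependency check described above, the sketch below is not the paper's implementation: the window length, maximum lag and significance threshold are assumptions. It runs a Granger-causality test between two sensor streams over sliding windows and flags the windows in which the dependency is no longer detected.

```python
# Minimal sketch: sliding-window Granger-causality check between two sensor
# streams. Window size, max lag and alpha are illustrative assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_pvalue(x, y, maxlag=2):
    """p-value of the test that y Granger-causes x (ssr F-test, best lag)."""
    data = np.column_stack([x, y])
    results = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    return min(res[0]["ssr_ftest"][1] for res in results.values())

def detect_dependency_changes(x, y, window=200, step=100, alpha=0.05):
    """Return start indices of windows where the dependency is not significant."""
    flagged = []
    for start in range(0, len(x) - window + 1, step):
        p = granger_pvalue(x[start:start + window], y[start:start + window])
        if p > alpha:                      # dependency no longer significant
            flagged.append(start)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=1000)
    x = np.roll(y, 2) + 0.1 * rng.normal(size=1000)   # x lags y by 2 samples
    x[600:] = rng.normal(size=400)                    # injected fault: dependency breaks
    print(detect_dependency_changes(x, y))
```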
Execution monitoring is essential for robots to detect and respond to failures. Since it is impossible to enumerate all failures for a given task, we learn from successful executions of the task to detect visual anomalies at runtime. Our method learns to predict the motions that occur during the nominal execution of a task, including camera and robot body motion. A probabilistic U-Net architecture is used to learn to predict optical flow, and the robot's kinematics and 3D model are used to model camera and body motion. The errors between the observed and predicted motion are used to calculate an anomaly score. We evaluate our method on a dataset of a robot placing a book on a shelf, which includes anomalies such as falling books, camera occlusions, and robot disturbances. We find that modeling camera and body motion, in addition to the learning-based optical flow prediction, results in an improvement of the area under the receiver operating characteristic curve from 0.752 to 0.804, and the area under the precision-recall curve from 0.467 to 0.549.
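For context on how such frame-level numbers can be computed, here is a small, generic sketch; it is not the authors' code, and the simple mean end-point-error anomaly score and the array shapes are assumptions. It turns per-frame flow prediction errors into AUROC and precision-recall scores with scikit-learn.

```python
# Illustrative sketch: per-frame anomaly scores from optical-flow prediction
# error, evaluated against frame-level anomaly labels. The mean end-point-error
# score and shapes are assumptions, not the paper's method.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def flow_anomaly_scores(predicted_flow, observed_flow):
    """Mean end-point error per frame; flow arrays have shape (T, H, W, 2)."""
    epe = np.linalg.norm(predicted_flow - observed_flow, axis=-1)  # (T, H, W)
    return epe.reshape(epe.shape[0], -1).mean(axis=1)              # (T,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, H, W = 120, 32, 40
    observed = rng.normal(size=(T, H, W, 2))
    predicted = observed + 0.1 * rng.normal(size=(T, H, W, 2))
    predicted[80:] += 0.5                      # pretend the last frames are anomalous
    labels = np.zeros(T, dtype=int)
    labels[80:] = 1

    scores = flow_anomaly_scores(predicted, observed)
    print("AUROC:", roc_auc_score(labels, scores))
    print("AUPRC:", average_precision_score(labels, scores))
```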
This paper presents the b-it-bots@Home team and its mobile service robot Jenny, which is based on the Care-O-bot 3 platform manufactured by the Fraunhofer Institute for Manufacturing Engineering and Automation. The paper gives an overview of the robot control architecture and its capabilities, where the capabilities refer to the functionalities added through research and projects carried out at Bonn-Rhein-Sieg University of Applied Sciences.
Robot deployment in realistic environments is challenging despite the fact that robots can be quite skilled at a large number of isolated tasks. One reason for this is that robots are rarely equipped with powerful introspection capabilities, which means that they cannot always deal with failures in an acceptable manner; in addition, manual diagnosis is often a tedious task that requires technicians to have a considerable set of robotics skills. In this paper, we discuss our ongoing efforts to address some of these problems. In particular, we (i) present our early efforts at developing a robotic black box and consider some factors that complicate its design, (ii) explain our component and system monitoring concept, and (iii) describe the necessity for remote monitoring and experimentation as well as our initial attempts at performing those. Our preliminary work opens a range of promising directions for making robots more usable and reliable in practice.
Robot deployment in realistic dynamic environments is a challenging problem despite the fact that robots can be quite skilled at a large number of isolated tasks. One reason for this is that robots are rarely equipped with powerful introspection capabilities, which means that they cannot always deal with failures in a reasonable manner; in addition, manual diagnosis is often a tedious task that requires technicians to have a considerable set of robotics skills.
While executing actions, service robots may experience external faults because of insufficient knowledge about the actions' preconditions. The possibility of encountering such faults can be minimised if symbolic and geometric precondition models are combined into a representation that specifies how and where actions should be executed. This work investigates the problem of learning such action execution models and the manner in which those models can be generalised. In particular, we develop a template-based representation of execution models, which we call delta models, and describe how symbolic template representations and geometric success probability distributions can be combined for generalising the templates beyond the problem instances on which they are created. Our experimental analysis, which is performed with two physical robot platforms, shows that delta models can describe execution-specific knowledge reliably, thus serving as a viable model for avoiding the occurrence of external faults.
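As a loose illustration of pairing a symbolic precondition template with a geometric success-probability distribution, the sketch below is assumption-laden: the class name, predicate format and the Gaussian model are made up for this example and are not the paper's delta-model definition.

```python
# Illustrative sketch of an action execution model that pairs symbolic
# preconditions with a geometric success-probability distribution.
# All names and the Gaussian choice are assumptions made for this example.
from dataclasses import dataclass
import numpy as np
from scipy.stats import multivariate_normal

@dataclass
class ExecutionModel:
    action: str
    symbolic_preconditions: list            # e.g. ["holding(obj)", "at(robot, table)"]
    mean: np.ndarray                        # mean relative target position (x, y)
    cov: np.ndarray                         # covariance estimated from successful executions

    def preconditions_hold(self, facts: set) -> bool:
        """Check that every symbolic precondition is in the current world state."""
        return all(p in facts for p in self.symbolic_preconditions)

    def success_likelihood(self, relative_pose: np.ndarray) -> float:
        """Density of a candidate execution pose under the success distribution."""
        return multivariate_normal(self.mean, self.cov).pdf(relative_pose)

# Usage: pick the candidate placement pose with the highest success likelihood.
model = ExecutionModel(
    action="place(obj, table)",
    symbolic_preconditions=["holding(obj)", "at(robot, table)"],
    mean=np.array([0.45, 0.0]),
    cov=np.diag([0.01, 0.02]),
)
facts = {"holding(obj)", "at(robot, table)"}
candidates = [np.array([0.40, 0.05]), np.array([0.70, 0.30])]
if model.preconditions_hold(facts):
    best = max(candidates, key=model.success_likelihood)
    print("chosen relative pose:", best)
```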
This dataset contains multimodal data recorded on robots performing robot-to-human and human-to-robot handovers. The intended application of the dataset is the development and benchmarking of methods for failure detection during handovers; the trials in the dataset therefore contain both successful and failed handover actions. For a more detailed description of the dataset, please see the included Datasheet.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation and robust object recognition.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation and robust object recognition.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments including features such as reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach for grasping vertical objects, object placement that considers the empty space on a workstation, and the ongoing port of our code to ROS 2.