H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (822)
- Fachbereich Angewandte Naturwissenschaften (446)
- Fachbereich Ingenieurwissenschaften und Kommunikation (303)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (274)
- Institute of Visual Computing (IVC) (252)
- Fachbereich Wirtschaftswissenschaften (204)
- Institut für funktionale Gen-Analytik (IFGA) (159)
- Institut für Cyber Security & Privacy (ICSP) (83)
- Institut für Verbraucherinformatik (IVI) (79)
- Graduierteninstitut (51)
Document Type
- Conference Object (831)
- Article (631)
- Part of a Book (143)
- Preprint (68)
- Doctoral Thesis (52)
- Book (monograph, edited volume) (41)
- Research Data (22)
- Master's Thesis (22)
- Report (17)
- Conference Proceedings (16)
Language
- English (1879)
Has Fulltext
- no (1879)
Keywords
- Robotics (13)
- FPGA (12)
- Virtual Reality (12)
- Machine Learning (9)
- GC/MS (8)
- ICT (8)
- Quality diversity (8)
- virtual reality (8)
- Performance (7)
- 3D user interface (6)
The work presented in this paper focuses on the comparison of well-known and new techniques for designing robust fault diagnosis schemes in the robot domain. The main challenge for fault diagnosis is to allow the robot to cope effectively not only with internal hardware and software faults, but also with external disturbances and errors arising from dynamic and complex environments.
In the realm of service robots, recovery from faults is indispensable to foster user acceptance. Here, fault is to be understood not in the sense of robot-internal faults, but rather as interaction faults that occur while the robot is situated in and interacting with an environment (aka external faults). We reason along the most frequent failures in typical scenarios, which we observed during real-world demonstrations and competitions using our Care-O-bot III robot. They take place in an apartment-like environment, which is known as a closed world. We suggest four different (for now ad hoc) fault categories caused by disturbances, imperfect perception, inadequate planning, or chaining of action sequences. The faults are categorized and then mapped to a handful of partly known, partly extended fault handling techniques. Among them, we applied qualitative reasoning, use of simulation as an oracle, and learning for planning (aka enhancement of plan operators), or, in the future, case-based reasoning. Having laid out this frame, we mainly ask open questions related to the applicability of the presented approach, amongst them: how to find new categories, how to extend them, how to assure disjointness, and how to identify old and label new faults on the fly.
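As an illustration of the categorisation idea, the sketch below pairs the four fault categories named in the abstract with the four handling techniques it lists. The abstract does not state which category maps to which technique, so the pairing here is purely hypothetical.

```python
# Illustrative sketch only: the abstract names four fault categories and four
# handling techniques but does not give the actual mapping; the pairing below
# is hypothetical.
from enum import Enum, auto

class FaultCategory(Enum):
    DISTURBANCE = auto()           # external disturbances
    IMPERFECT_PERCEPTION = auto()  # e.g. misdetected or missed objects
    INADEQUATE_PLANNING = auto()   # plan does not fit the situation
    ACTION_CHAINING = auto()       # faulty chaining of action sequences

# Hypothetical category -> technique mapping (NOT taken from the paper).
HANDLERS = {
    FaultCategory.DISTURBANCE: "qualitative_reasoning",
    FaultCategory.IMPERFECT_PERCEPTION: "simulation_as_oracle",
    FaultCategory.INADEQUATE_PLANNING: "learning_for_planning",
    FaultCategory.ACTION_CHAINING: "case_based_reasoning",
}

def handle_fault(category: FaultCategory) -> str:
    """Dispatch an observed fault to a recovery technique by category."""
    return HANDLERS[category]

if __name__ == "__main__":
    print(handle_fault(FaultCategory.IMPERFECT_PERCEPTION))
```

A table-driven dispatch like this keeps the category set easy to extend, which touches on one of the open questions the abstract raises (how to add new categories on the fly).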
This paper presents an approach to estimate the ego-motion of a robot while moving. The employed sensor is a Time-of-Flight (ToF) camera, the SR3000 from Mesa Imaging. ToF cameras provide depth and reflectance data of the scene at high frame rates. The proposed method utilizes the coherence of depth and reflectance data of ToF cameras by detecting image features on the reflectance data and estimating the motion on the depth data. The motion estimate of the camera is fused with inertial measurements to gain higher accuracy and robustness. The result of the algorithm is benchmarked against reference poses determined by matching accurate 2D range scans. The evaluation shows that fusing the pose estimate with the data from the IMU improves the accuracy and robustness of the motion estimate against distorted measurements from the sensor.
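The core of such an approach is estimating a rigid transform between matched 3D points across frames. The sketch below is a minimal NumPy version of that alignment step (Kabsch/SVD); it assumes feature correspondences have already been found on the reflectance images and back-projected to 3D using the depth data, and it omits the SR3000 interface and the IMU fusion, so it is not the paper's exact estimator.

```python
# Sketch of the rigid-motion step between two ToF frames, assuming matched
# 3D points are already available (features from reflectance images,
# back-projected with depth data). Pure NumPy Kabsch/SVD alignment.
import numpy as np

def estimate_rigid_motion(p_prev: np.ndarray, p_curr: np.ndarray):
    """Least-squares R, t such that p_curr ~ R @ p_prev + t.

    p_prev, p_curr: (N, 3) arrays of matched 3D points from two frames.
    """
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Usage: recover a known motion from synthetic points.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(50, 3))
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, 0.0, 0.05])
R_est, t_est = estimate_rigid_motion(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true, atol=1e-6)
```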
This work presents a person-independent pointing gesture recognition application. It uses simple but effective features for the robust tracking of the head and the hand of the user in an undefined environment. The application is able to detect if the tracking is lost and can be reinitialized automatically. The pointing gesture recognition accuracy is improved by the proposed fingertip detection algorithm and by the detection of the width of the face. The experimental evaluation with eight different subjects shows that the overall average pointing gesture recognition rate of the system for distances up to 250 cm (head to pointing target) is 86.63% (with a distance between objects of 23 cm). Considering only frontal pointing gestures, the recognition rate is 90.97% for distances up to 250 cm, and even 95.31% for distances up to 194 cm. The average error angle is 7.28°.
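The reported error angle can be made concrete with the common head-to-fingertip pointing model: the estimated pointing ray runs from the head through the fingertip, and the error is its angle to the true head-to-target direction. The sketch below computes that metric for hypothetical 3D coordinates; it is not the paper's pipeline, which additionally covers tracking, reinitialization, and fingertip detection.

```python
# Minimal sketch of the head-fingertip pointing model and the error-angle
# metric. All coordinates are hypothetical 3D positions in a common frame
# (metres); the actual tracking and detection stages are assumed.
import numpy as np

def pointing_error_deg(head, fingertip, target):
    """Angle between the head->fingertip ray and the head->target direction."""
    ray = fingertip - head
    to_target = target - head
    cos_a = np.dot(ray, to_target) / (np.linalg.norm(ray) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

head = np.array([0.0, 1.6, 0.0])        # hypothetical head position
fingertip = np.array([0.3, 1.4, 0.5])   # hypothetical fingertip position
target = np.array([1.5, 0.8, 2.3])      # hypothetical pointing target
print(f"error angle: {pointing_error_deg(head, fingertip, target):.2f} deg")
```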
Understanding the Internet of Things: A Conceptualisation of Business-to-Thing (B2T) Interactions
(2015)