This paper presents the b-it-bots@Home team and its mobile service robot Jenny, which is based on the Care-O-bot 3 platform manufactured by the Fraunhofer Institute for Manufacturing Engineering and Automation. The paper gives an overview of the robot control architecture and its capabilities. These capabilities refer to the functionalities added through research and projects carried out at the Bonn-Rhein-Sieg University of Applied Sciences.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
TSEM: Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
(2022)
Deep learning has become a one-size-fits-all solution for technical and business domains thanks to its flexibility and adaptability. It is implemented using opaque models, which unfortunately undermines the trustworthiness of their outcomes. In order to better understand the behavior of such a system, particularly one driven by time series, looking inside the deep learning model through so-called post-hoc eXplainable Artificial Intelligence (XAI) approaches is important. There are two major types of XAI for time series data, namely model-agnostic and model-specific; a model-specific approach is considered in this work. While other approaches employ either Class Activation Mapping (CAM) or an attention mechanism, we merge the two strategies into a single system, called the Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series (TSEM). TSEM combines the capabilities of RNN and CNN models in such a way that the RNN hidden states are employed as attention weights along the temporal axis of the CNN feature maps. The results show that TSEM outperforms XCM and is similar to STAM in terms of accuracy, while also satisfying a number of interpretability criteria, including causality, fidelity, and spatiotemporality.
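The core mechanism described in the abstract, using RNN hidden states to re-weight CNN feature maps along the temporal axis, can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the filter shapes, the scalar projection of the hidden state, and the final temporal pooling are assumptions made for the example.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d_feature_maps(x, kernels):
    """'Same'-padded 1D convolution over the time axis with ReLU.

    x: (T, D) multivariate series; kernels: (F, K, D) filters.
    Returns feature maps of shape (T, F).
    """
    T, D = x.shape
    F, K, _ = kernels.shape
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((T, F))
    for f in range(F):
        for t in range(T):
            out[t, f] = np.sum(xp[t:t + K] * kernels[f])
    return np.maximum(out, 0.0)

def rnn_temporal_attention(x, Wx, Wh, v):
    """Vanilla RNN whose hidden state at each step is projected to a
    scalar score; the scores are softmax-normalised over time to give
    one attention weight per time step."""
    T, _ = x.shape
    h = np.zeros(Wh.shape[0])
    scores = np.zeros(T)
    for t in range(T):
        h = np.tanh(x[t] @ Wx + h @ Wh)
        scores[t] = h @ v
    return softmax(scores)

def tsem_forward(x, kernels, Wx, Wh, v):
    """Sketch of the TSEM idea: RNN-derived attention weights
    re-weight the CNN feature maps along the temporal axis.
    Returns pooled features plus the attention vector, which serves
    as the temporal explanation."""
    maps = conv1d_feature_maps(x, kernels)        # (T, F)
    alpha = rnn_temporal_attention(x, Wx, Wh, v)  # (T,)
    weighted = maps * alpha[:, None]              # temporal re-weighting
    return weighted.sum(axis=0), alpha
```

The attention vector `alpha` is what makes the model self-explaining: time steps with large weights are the ones the network relied on, which is how the causality and spatiotemporality criteria mentioned in the abstract can be inspected.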
This work presents a person-independent pointing gesture recognition application. It uses simple but effective features for robust tracking of the user's head and hand in an undefined environment. The application is able to detect when tracking is lost and can reinitialize automatically. The pointing gesture recognition accuracy is improved by the proposed fingertip detection algorithm and by detection of the width of the face. The experimental evaluation with eight different subjects shows that the overall average pointing gesture recognition rate of the system for distances up to 250 cm (head to pointing target) is 86.63% (with a distance between objects of 23 cm). Considering only frontal pointing gestures, the recognition rate is 90.97% for distances up to 250 cm and even 95.31% for distances up to 194 cm. The average error angle is 7.28°.