006 Spezielle Computerverfahren (Special Computer Methods)
Departments, institutes and facilities
- Fachbereich Informatik (22)
- Institute of Visual Computing (IVC) (10)
- Institut für Sicherheitsforschung (ISF) (4)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (4)
- Fachbereich Ingenieurwissenschaften und Kommunikation (2)
- Fachbereich Wirtschaftswissenschaften (2)
- Institut für KI und Autonome Systeme (A2S) (2)
- Fachbereich Angewandte Naturwissenschaften (1)
- Institut für Cyber Security & Privacy (ICSP) (1)
- Institut für Verbraucherinformatik (IVI) (1)
Document Type
- Conference Object (35)
Language
- English (35)
Has Fulltext
- no (35)
Towards self-explaining social robots. Verbal explanation strategies for a needs-based architecture
(2019)
In order to establish long-term relationships with users, social companion robots and their behaviors need to be comprehensible. Purely reactive behavior such as answering questions or following commands can be readily interpreted by users. However, the robot's proactive behaviors, included in order to increase liveliness and improve the user experience, often raise a need for explanation. In this paper, we provide a concept to produce accessible “why-explanations” for the goal-directed behavior an autonomous, lively robot might produce. To this end we present an architecture that provides reasons for behaviors in terms of comprehensible needs and strategies of the robot, and we propose a model for generating different kinds of explanations.
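The needs-and-strategies idea in the abstract above can be sketched in a few lines: the robot picks the behavior serving its most urgent need, and the explanation is generated from that same need. All need names, strategies, and the explanation template below are hypothetical stand-ins, not the paper's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str        # e.g. "social contact" (hypothetical label)
    urgency: float   # 0.0 (satisfied) .. 1.0 (urgent)

# Hypothetical mapping from a need to a behavior (strategy) the robot may choose.
STRATEGIES = {
    "social contact": "approach the user and start small talk",
    "energy": "drive to the charging station",
}

def choose_behavior(needs):
    """Pick the behavior that serves the most urgent need."""
    top = max(needs, key=lambda n: n.urgency)
    return top, STRATEGIES[top.name]

def why_explanation(need, behavior):
    """Produce a simple template-based 'why-explanation'."""
    return (f"I am going to {behavior} because my need for "
            f"{need.name} is currently high ({need.urgency:.1f}).")

needs = [Need("social contact", 0.8), Need("energy", 0.3)]
need, behavior = choose_behavior(needs)
print(why_explanation(need, behavior))
```

Because the explanation is derived from the same need that selected the behavior, it stays consistent with what the robot actually does, which is the core of a "why-explanation".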
Towards explaining deep learning networks to distinguish facial expressions of pain and emotions
(2018)
Deep learning networks are successfully used for object and face recognition in images and videos. However, current procedures are only suitable to a limited extent for applying such networks in practice, for example as a pain recognition tool in hospitals. The advantage of deep learning methods is that they can learn complex non-linear relationships between raw data and target classes without limiting themselves to a set of hand-crafted features provided by humans. The disadvantage, however, is that due to the complexity of these networks it is not possible to interpret the knowledge stored inside them: they are black-box learning procedures. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI method Layer-wise Relevance Propagation (LRP) and apply it to explain how a deep learning network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
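As a rough illustration of the technique named above: LRP takes the network's output score for the predicted class and redistributes it layer by layer back to the inputs, in proportion to each input's contribution. The tiny two-layer ReLU network, the fixed weights, and the simple LRP-epsilon rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical tiny network: 3 inputs -> ReLU hidden (2) -> 2 output logits.
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.5, -0.5]])
W2 = np.array([[1.0, -1.0],
               [0.5, 0.5]])

def forward(x):
    a1 = np.maximum(0.0, x @ W1)   # hidden activations
    return a1, a1 @ W2             # logits

def lrp_layer(a, W, R, eps=1e-9):
    """Basic LRP-epsilon rule: redistribute relevance R from a layer's
    outputs back to its inputs, proportional to each input's contribution."""
    z = a @ W + eps * np.sign(a @ W)   # slightly stabilized pre-activations
    s = R / z                          # relevance per unit of pre-activation
    return a * (W @ s)                 # relevance assigned to the inputs

x = np.array([1.0, 2.0, 0.5])
a1, logits = forward(x)
# Start from the relevance of the predicted class only.
R_out = np.zeros_like(logits)
R_out[np.argmax(logits)] = logits.max()
R_hidden = lrp_layer(a1, W2, R_out)
R_input = lrp_layer(x, W1, R_hidden)
print(R_input)  # per-feature relevance scores
```

A useful sanity check is conservation: the relevance summed over the inputs approximately equals the output score being explained, so a relevance "heatmap" over pixels accounts for the whole decision.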
Towards an Interaction-Centered and Dynamically Constructed Episodic Memory for Social Robots
(2020)
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
In this paper, we introduce an optical sensor system that is integrated into an industrial push-button. When the button is pressed, the sensor classifies the material in contact with it into different material categories on the basis of the material's so-called "spectral signature". An approach for a safety sensor system for circular table saws on the same basis was previously introduced at SIAS 2007. This contactless sensor can reliably distinguish between skin, textiles, leather, and various other kinds of materials. A typical application for this intelligent push-button is its use on potentially dangerous machines whose operating instructions either prohibit or require wearing gloves during work at the machine. An example of machines at which no gloves are allowed are pillar drilling machines, because of the risk of getting caught in the drill chuck and being pulled in by the machine, which in many cases causes very serious hand injuries. Depending on the application's needs, the sensor system integrated into the push-button can be flexibly configured in software to prevent the operator from accidentally starting a machine with or without gloves, which can significantly decrease the risk of severe accidents. Two-hand controls in particular invite manipulation for easier handling. By equipping both push-buttons of a two-hand control with material classification capabilities, the user is forced to operate the controls with bare fingers; this limitation prevents defeating a two-hand control with a simple rodding device.
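The classification and interlock logic described above might be sketched as a nearest-neighbour match against stored reference signatures, followed by a glove-policy check. The wavelength values, distance threshold, and function names below are invented for illustration, not the sensor's actual algorithm:

```python
import numpy as np

# Hypothetical reference "spectral signatures": normalized reflectance at a
# few wavelengths (the numbers are made up for illustration).
SIGNATURES = {
    "skin":    np.array([0.35, 0.55, 0.80, 0.60]),
    "textile": np.array([0.70, 0.65, 0.60, 0.58]),
    "leather": np.array([0.20, 0.25, 0.30, 0.40]),
}

def classify(measurement, max_dist=0.5):
    """Classify a measured signature by its nearest reference signature;
    reject as 'unknown' if nothing is close enough."""
    best, dist = min(((name, np.linalg.norm(measurement - sig))
                      for name, sig in SIGNATURES.items()),
                     key=lambda t: t[1])
    return best if dist <= max_dist else "unknown"

def allow_start(material, gloves_required):
    """Gate the machine start: e.g. a pillar drill (gloves_required=False)
    may only start when bare skin is detected on the button."""
    wearing_gloves = material in ("textile", "leather")
    return wearing_gloves == gloves_required

reading = np.array([0.33, 0.57, 0.78, 0.61])   # close to the 'skin' entry
material = classify(reading)
print(material, allow_start(material, gloves_required=False))
```

For a two-hand control, the same check would simply be applied to both buttons, so a rodding device (which is neither skin nor a permitted glove material) fails the gate.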
Taste is a complex phenomenon that depends on individual experience and is a matter of collective negotiation and mediation. Nevertheless, it is uncommon to include taste and its many facets in everyday design, particularly in online shopping for fresh food products. To realize this unused potential, we conducted two co-design workshops. Based on the participants' results in the workshops, we prototyped and evaluated a click-dummy smartphone app to explore consumers' needs for digital taste depiction. We found that emphasizing the natural qualities of food products, external reviews, and personalization features leads to a reflection on the individual taste experience. The self-reflection fostered through our design enables consumers to develop their taste competencies and thus strengthen their autonomy in decision-making. Ultimately, exploring taste as a social experience adds to a broader understanding of taste beyond a sensory phenomenon.
Low-Cost In-Hand Slippage Detection and Avoidance for Robust Robotic Grasping with Compliant Fingers
(2021)
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of the cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
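The dynamic head-rotation compensation mentioned in study 1 can be sketched as a simple angle calculation: the nozzle is steered to the world-fixed wind direction expressed in head-relative coordinates. The yaw-only model and function names below are assumptions for illustration, not the system's actual controller:

```python
def wrap_deg(a):
    """Wrap an angle to the interval (-180, 180] degrees."""
    a = (a + 180.0) % 360.0 - 180.0
    return 180.0 if a == -180.0 else a

def compensated_wind_angle(world_dir_deg, head_yaw_deg):
    """Angle of the wind nozzle relative to the head so the wind keeps
    coming from a fixed direction in the virtual world even as the user
    turns their head (yaw only; names are hypothetical)."""
    return wrap_deg(world_dir_deg - head_yaw_deg)

# Wind from 30 deg in the world; the user turns their head 90 deg:
# the nozzle must move to -60 deg relative to the head.
print(compensated_wind_angle(30.0, 90.0))
```

Updating this angle every frame from the headset's tracked yaw is what makes the wind appear stationary in the virtual world.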