006 Special Computer Methods
Departments, institutes and facilities
- Fachbereich Informatik (43)
- Institute of Visual Computing (IVC) (17)
- Institut für Verbraucherinformatik (IVI) (14)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (10)
- Institut für Sicherheitsforschung (ISF) (7)
- Fachbereich Ingenieurwissenschaften und Kommunikation (4)
- Graduierteninstitut (3)
- Institut für KI und Autonome Systeme (A2S) (2)
- Zentrum für Ethik und Verantwortung (ZEV) (2)
Document Type
- Conference Object (41)
- Article (20)
- Part of a Book (7)
- Report (5)
- Contribution to a Periodical (4)
- Doctoral Thesis (4)
- Preprint (4)
- Book (monograph, edited volume) (2)
- Research Data (2)
- Patent (1)
Keywords
- Augmented Reality (3)
- Machine Learning (3)
- Machine learning (3)
- deep learning (3)
- facial expression analysis (3)
- Automatic pain detection (2)
- Dementia (2)
- Explainable artificial intelligence (2)
- Robotics (2)
- Virtual Reality (2)
- emotion recognition (2)
- guidance (2)
- pain recognition (2)
- 3D user interface (1)
- 450 MHz (1)
- Action Unit detection (1)
- Agile software development (1)
- Algorithmik (1)
- Altenhilfe (1)
- Aneignungsstudie (1)
- Applications in Energy Transport (1)
- Auditory Cueing (1)
- Automatic Differentiation (1)
- Bayesian Deep Learning (1)
- Behaviour-Driven Development (1)
- Bioinformatics (1)
- Blasendiagramm (1)
- Business Process Intelligence (1)
- Case study (1)
- Classifiers (1)
- Collaborating industrial robots (1)
- Community of Practice (1)
- Complex Systems Modeling and Simulation (1)
- Compliant fingers (1)
- Computational fluid dynamics (1)
- Computergrafik (1)
- Concurrent repeated failure prognosis (1)
- Conformation (1)
- Crossmedia (1)
- Crystal structure (1)
- Curriculum (1)
- Cybersickness (1)
- Data Fusion (1)
- Data management (1)
- Data sharing (1)
- Datenanalyse (1)
- Demenz (1)
- Design (1)
- Diagnostic bond graph-based online fault diagnosis (1)
- Disco (1)
- Distance Perception (1)
- Educational Data Mining (1)
- Educational Process Mining (1)
- Embedded system (1)
- Emotion (1)
- Exergame (1)
- Exergames (1)
- Experten (1)
- Facial Emotion Recognition (1)
- Facial expression (1)
- Fall prevention (1)
- Fallbeschreibung (1)
- Flow control (1)
- Fluency (1)
- Forests (1)
- Functional safety (1)
- Future-oriented business models (1)
- Fuzzy Mining (1)
- Games and Simulations for Learning (1)
- Gaussian state estimation (1)
- Geschäftsprozess (1)
- HCI (1)
- Head-mounted Display (1)
- Higher education (1)
- Human factors (1)
- Human-Food-Interaction (1)
- Hyperspectral image (1)
- ICT (1)
- ICT Design (1)
- IEC 104 (1)
- IEC 61850 (1)
- Increasing fault magnitude (1)
- Inductive Logic Programming (1)
- Inductive Visual Mining (1)
- Information Security (1)
- Intermittent faults (1)
- Kalman filter (1)
- Kinect (1)
- Knowledge Graphs (1)
- Kollektiventscheidung (1)
- Komplexitätstheorie (1)
- LTE-M (1)
- Language learning (1)
- Langzeitbehandlung (1)
- Lattice Boltzmann Method (1)
- Ligands (1)
- Living Lab (1)
- Locomotion (1)
- MQTT (1)
- Mathematical methods (1)
- Microgravity (1)
- Model-driven engineering (1)
- Molecular structure (1)
- Motion Sickness (1)
- NIR-point sensor (1)
- Natural Language Processing (1)
- Negotiation of Taste (1)
- Neural representations (1)
- Non-linear systems (1)
- Object-Based Image Analysis (OBIA) (1)
- Older adults (1)
- Out-of-view Objects (1)
- Pain diagnostics (1)
- Pflegepersonal (1)
- ProM (1)
- Process Mining (1)
- Pronunciation (1)
- Pytorch (1)
- Qualitative study (1)
- Raman microscopy (1)
- RapidMiner (1)
- Ray tracing (1)
- Reasoning (1)
- Remaining Useful Life (RUL) estimates (1)
- Requirements (1)
- Review (1)
- Robust grasping (1)
- Serious Games (1)
- Skin detection (1)
- Slippage detection (1)
- Smart Grid (1)
- Smart Home (1)
- Smart InGaAs camera-system (1)
- Social-Choice-Theorie (1)
- Spieltheorie (1)
- Studenten (1)
- Studienverlauf (1)
- Survey (1)
- Taste (1)
- Technologie (1)
- Traffic Simulations (1)
- Transformers (1)
- Travel Techniques (1)
- Tree Stumps (1)
- UAV (1)
- Ultrasonic array (1)
- Uncertainty Quantification (1)
- Underwater (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Unterstützung (1)
- User Experience (1)
- User centered design (1)
- User feedback (1)
- User-Centered Design (1)
- Videogame (1)
- Virtual Agents (1)
- Virtuelle Realität (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visuelle Wahrnehmung (1)
- Vulnerable Groups (1)
- Wearables (1)
- Wissensaustausch (1)
- action unit recognition (1)
- aerodynamics (1)
- appraisal theory (1)
- audio-tactile feedback (1)
- authoring tools (1)
- autonomous explanation generation (1)
- brightfield microscopy (1)
- co-design (1)
- component analyses (1)
- depth perception (1)
- distraction detection (1)
- driver distraction (1)
- dynamic vector fields (1)
- emotion inference (1)
- explainability (1)
- facial action units (1)
- facial expressions (1)
- facial expressions of pain (1)
- flight zone (1)
- geofence (1)
- haptics (1)
- human-robot interaction (HRI) (1)
- image fusion (1)
- information fusion (1)
- interaction architecture (1)
- interpretability (1)
- mental state analysis (1)
- multisensory (1)
- neural networks (1)
- neutral buoyancy (1)
- optic flow (1)
- pain datasets (1)
- pain detection (1)
- pain feature representation (1)
- pansharpening (1)
- path tracing (1)
- prototyping (1)
- real-time (1)
- remote sensing (1)
- self-motion perception (1)
- sensors (1)
- sensory perception (1)
- social robots (1)
- socio-interactive explanation generation (1)
- state constraints (1)
- state estimation (1)
- survey (1)
- transparency (1)
- user-centered explanation generation (1)
- vection (1)
- virtual reality (1)
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
This paper addresses the classification of Arabic text data in the field of Natural Language Processing (NLP), with a particular focus on Natural Language Inference (NLI) and Contradiction Detection (CD). Arabic is a resource-poor language: few data sets are available, which in turn limits the availability of NLP methods. To overcome this limitation, we create a dedicated data set from publicly available resources. Transformer-based machine learning models are then trained and evaluated on it. We find that a language-specific model (AraBERT) performs competitively with state-of-the-art multilingual approaches when we apply linguistically informed pre-training methods such as Named Entity Recognition (NER). To our knowledge, this is the first large-scale evaluation of this task in Arabic, as well as the first application of multi-task pre-training in this context.
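The multi-task pre-training idea (mixing a task such as NER with NLI) can be sketched independently of any particular model: a loader that interleaves batches from several tasks so that consecutive optimizer steps alternate between them. The helper below is purely illustrative; the task names and batch contents are placeholders, not code from the paper.

```python
from itertools import cycle

def interleave_tasks(batches_by_task):
    """Round-robin interleave batches from several pre-training tasks
    (e.g. NER and NLI), so each training step alternates tasks.
    Tasks with fewer batches simply drop out when exhausted."""
    iters = {task: iter(batches) for task, batches in batches_by_task.items()}
    order = cycle(list(batches_by_task))  # fixed task order to cycle through
    schedule = []
    while iters:
        task = next(order)
        if task not in iters:  # this task is already exhausted
            continue
        try:
            schedule.append((task, next(iters[task])))
        except StopIteration:
            del iters[task]
    return schedule
```

Each entry of the returned schedule pairs a task name with one batch, which a training loop could dispatch to the corresponding loss head.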
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, reducing performance through factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. The thesis also discusses the limitations of the proposed methods and provides recommendations for future research.
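Head-based guidance of this kind reduces, at its core, to mapping the angular offset between the head direction and the target onto a lateralized cue. A minimal sketch follows; the angle convention, the left/right mapping, and the 90-degree saturation point are all illustrative assumptions, not values from the thesis.

```python
def guidance_cue(head_yaw_deg, target_azimuth_deg):
    """Map the angular offset between the user's head yaw and a target
    azimuth (both in degrees) to a left/right audio-tactile cue.

    Returns (side, intensity) where intensity grows with angular
    distance and saturates at an assumed 90-degree offset.
    """
    # Wrap the offset into (-180, 180] so we always turn the short way.
    offset = (target_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    side = "left" if offset < 0 else "right"
    intensity = min(abs(offset) / 90.0, 1.0)
    return side, round(intensity, 3)
```

A guidance loop would re-evaluate this every frame and drive, for example, stereo panning or a vibrotactile actuator on the chosen side.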
The increasing ubiquity of Artificial Intelligence (AI) poses significant political consequences. The rapid proliferation of AI over the past decade has prompted legislators and regulators to attempt to contain AI’s technological consequences. For Germany, relevant design requirements have been expressed by the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG AI), and, at the national level, by the German government’s Data Ethics Commission (DEK) as well as the German Bundestag’s Commission of Inquiry on Artificial Intelligence (EKKI).
How AI Learns Innere Führung
(2022)
That artificial intelligence (AI) has spread worldwide is a truism. The rapid and unstoppable proliferation of AI over the past ten years speaks for itself, and legislators and regulators have long since followed suit in an effort to contain AI and its technological consequences. Design requirements relevant to Germany have been articulated by the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at the national level, by the German government's Data Ethics Commission (DEK) and the German Bundestag's Commission of Inquiry on Artificial Intelligence (EKKI).
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. To develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and that form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after an explanation is received. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior by tracing its internal decision-making routes. The goal is to generate social robot behavior that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior.
In this article, we present a social robot interaction architecture designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions, one that enables explaining and elaborating according to the needs for explanation that emerge within an interaction. We then introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot and integrated with its dialog processing modules.
We present the components of this interaction architecture and explain how they are integrated to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
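The needs-driven loop described in the abstract can be caricatured in a few lines: an internal need decays over time, triggers a behavior once it falls below a threshold, and every decision is logged so that a verbal self-explanation can later be traced back to it. The need name, threshold, and explanation template below are placeholders, not the robot's actual architecture.

```python
class SelfExplainingRobot:
    """Toy sketch of a needs-driven, self-explaining behavior loop."""

    def __init__(self):
        self.needs = {"social_contact": 1.0}  # 1.0 = fully satisfied
        self.trace = []  # decision log used to generate explanations

    def step(self, decay=0.3):
        """Decay needs; trigger a behavior when one drops below 0.5."""
        for name in self.needs:
            self.needs[name] -= decay
            if self.needs[name] < 0.5:
                self.trace.append((name, "greet_user"))
                self.needs[name] = 1.0  # the behavior satisfies the need
                return "greet_user"
        return "idle"

    def explain_last(self):
        """Verbalize the most recent decision from the trace."""
        if not self.trace:
            return "I have not acted yet."
        need, behavior = self.trace[-1]
        return f"I chose to {behavior} because my need for {need} was low."
```

Tracing decisions at the moment they are made, rather than reconstructing them afterwards, is what makes the explanation faithful to the actual decision route.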
Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametric indicators of facial muscular activity. Particular combinations of AUs have been shown to be pain-indicative. Manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis promise to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. Using videos of 40 participants (20 younger, mean age 25.7 years; 20 older, mean age 52.1 years) undergoing experimentally induced heat pain, we compared the performance of FaceReader7 against manually coded AUs as the gold-standard labels. Percentages of correctly and falsely classified AUs were calculated, and as indicators of congruency we computed sensitivity/recall, precision, and overall agreement (F1). Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. Congruency was better for younger than for older faces, and better for pain-indicative AUs than for other AUs. Conclusion. At present, automatic analyses of genuine facial expressions of pain qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
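The congruency measures named in the abstract are standard binary-classification metrics computed per action unit, with the manual FACS coding as ground truth. A minimal sketch follows; frame-wise 0/1 codings are an assumption here, and the study's exact aggregation may differ.

```python
def au_agreement(manual, automatic):
    """Compare manual (gold-standard) and automatic binary AU codings.

    manual, automatic: equal-length sequences of 0/1 per frame for one AU.
    Returns (recall, precision, f1), where F1 serves as the
    'overall agreement' between the two coders.
    """
    tp = sum(1 for m, a in zip(manual, automatic) if m == 1 and a == 1)
    fp = sum(1 for m, a in zip(manual, automatic) if m == 0 and a == 1)
    fn = sum(1 for m, a in zip(manual, automatic) if m == 1 and a == 0)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```

Running this per AU and per age group would reproduce the kind of comparison reported in the Results section.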
This document presents a summary of the author's dissertation. The dissertation [Ha20] proposed and validated a novel, hybrid approach for estimating the intensity of facial muscle movements (Action Units (AUs)). The approach is based on Gaussian state estimation and combines a deformable, AU-based facial shape model, a viscoelastic model of facial muscle movement, several appearance-based AU classifiers, and a facial landmark detection method. Several extensions were proposed and integrated into the state estimation framework in order to handle person-specific characteristics as well as technical and practical challenges. The AU intensity estimates produced with the proposed method were used for automatic pain detection and for the analysis of driver distraction.
This dissertation presents a probabilistic state estimation framework for integrating data-driven machine learning models and a deformable facial shape model in order to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AUs), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied to model the dynamics of facial muscle movements. Both facial shape and appearance information are used for estimating AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework deal with the multiple challenges involved in automatic AU detection. The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively using three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produced temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performed reasonably well, even though it simultaneously estimates the intensities of 22 AUs, some of which are subtle in expression or resemble each other closely. The estimated AU intensities tended toward the lower end of the value range and were often accompanied by a small delay in onset, indicating that the proposed method is conservative. To further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
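The combination of a viscoelastic muscle model with Gaussian state estimation can be illustrated by a one-AU toy version: a linear Kalman filter whose process model is an Euler-discretized damped mass-spring system (unit mass) and whose measurement is a noisy scalar classifier score. All constants below are illustrative choices, not values from the dissertation.

```python
def kalman_au_step(x, P, z, dt=0.1, k=4.0, c=2.0, q=0.01, r=0.25):
    """One predict/update step of a linear Kalman filter for a single AU.

    State x = [intensity, velocity]; P is its 2x2 covariance; z is a
    noisy scalar intensity measurement (e.g. a classifier score).
    The process model is a damped spring pulling intensity back to rest:
    x' = v,  v' = -k*x - c*v  (unit mass), discretized with step dt.
    """
    F = [[1.0, dt], [-k * dt, 1.0 - c * dt]]  # discretized dynamics
    # Predict: x_pred = F x,  P_pred = F P F^T + Q (Q = q*I)
    xp = [F[0][0] * x[0] + F[0][1] * x[1], F[1][0] * x[0] + F[1][1] * x[1]]
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
          for i in range(2)]
    Pp = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + (q if i == j else 0.0)
           for j in range(2)] for i in range(2)]
    # Update with measurement z of the intensity only (H = [1, 0])
    s = Pp[0][0] + r                      # innovation variance
    K = [Pp[0][0] / s, Pp[1][0] / s]      # Kalman gain
    y = z - xp[0]                         # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn
```

Because the spring always pulls the estimate back toward rest, the filtered intensities sit slightly below the raw scores and change smoothly, which mirrors the conservative, temporally smooth behavior described above.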