006 Spezielle Computerverfahren
Departments, institutes and facilities
- Fachbereich Informatik (43)
- Institute of Visual Computing (IVC) (17)
- Institut für Verbraucherinformatik (IVI) (14)
- Fachbereich Wirtschaftswissenschaften (13)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (10)
- Institut für Sicherheitsforschung (ISF) (7)
- Fachbereich Ingenieurwissenschaften und Kommunikation (4)
- Graduierteninstitut (3)
- Institut für KI und Autonome Systeme (A2S) (2)
- Zentrum für Ethik und Verantwortung (ZEV) (2)
Document Type
- Conference Object (41)
- Article (20)
- Part of a Book (7)
- Report (5)
- Contribution to a Periodical (4)
- Doctoral Thesis (4)
- Preprint (4)
- Book (monograph, edited volume) (2)
- Research Data (2)
- Patent (1)
Has Fulltext
- no (90)
Keywords
- Augmented Reality (3)
- Machine Learning (3)
- Machine learning (3)
- deep learning (3)
- facial expression analysis (3)
- Automatic pain detection (2)
- Dementia (2)
- Explainable artificial intelligence (2)
- Robotics (2)
- Virtual Reality (2)
- emotion recognition (2)
- guidance (2)
- pain recognition (2)
- 3D user interface (1)
- 450 MHz (1)
- Action Unit detection (1)
- Agile software development (1)
- Algorithmik (1)
- Altenhilfe (1)
- Aneignungsstudie (1)
- Applications in Energy Transport (1)
- Auditory Cueing (1)
- Automatic Differentiation (1)
- Bayesian Deep Learning (1)
- Behaviour-Driven Development (1)
- Bioinformatics (1)
- Blasendiagramm (1)
- Business Process Intelligence (1)
- Case study (1)
- Classifiers (1)
- Collaborating industrial robots (1)
- Community of Practice (1)
- Complex Systems Modeling and Simulation (1)
- Compliant fingers (1)
- Computational fluid dynamics (1)
- Computergrafik (1)
- Concurrent repeated failure prognosis (1)
- Conformation (1)
- Crossmedia (1)
- Crystal structure (1)
- Curriculum (1)
- Cybersickness (1)
- Data Fusion (1)
- Data management (1)
- Data sharing (1)
- Datenanalyse (1)
- Demenz (1)
- Design (1)
- Diagnostic bond graph-based online fault diagnosis (1)
- Disco (1)
- Distance Perception (1)
- Educational Data Mining (1)
- Educational Process Mining (1)
- Embedded system (1)
- Emotion (1)
- Exergame (1)
- Exergames (1)
- Experten (1)
- Facial Emotion Recognition (1)
- Facial expression (1)
- Fall prevention (1)
- Fallbeschreibung (1)
- Flow control (1)
- Fluency (1)
- Forests (1)
- Functional safety (1)
- Future-oriented business models (1)
- Fuzzy Mining (1)
- Games and Simulations for Learning (1)
- Gaussian state estimation (1)
- Geschäftsprozess (1)
- HCI (1)
- Head-mounted Display (1)
- Higher education (1)
- Human factors (1)
- Human-Food-Interaction (1)
- Hyperspectral image (1)
- ICT (1)
- ICT Design (1)
- IEC 104 (1)
- IEC 61850 (1)
- Increasing fault magnitude (1)
- Inductive Logic Programming (1)
- Inductive Visual Mining (1)
- Information Security (1)
- Intermittent faults (1)
- Kalman filter (1)
- Kinect (1)
- Knowledge Graphs (1)
- Kollektiventscheidung (1)
- Komplexitätstheorie (1)
- LTE-M (1)
- Language learning (1)
- Langzeitbehandlung (1)
- Lattice Boltzmann Method (1)
- Ligands (1)
- Living Lab (1)
- Locomotion (1)
- MQTT (1)
- Mathematical methods (1)
- Microgravity (1)
- Model-driven engineering (1)
- Molecular structure (1)
- Motion Sickness (1)
- NIR-point sensor (1)
- Natural Language Processing (1)
- Negotiation of Taste (1)
- Neural representations (1)
- Non-linear systems (1)
- Object-Based Image Analysis (OBIA) (1)
- Older adults (1)
- Out-of-view Objects (1)
- Pain diagnostics (1)
- Pflegepersonal (1)
- ProM (1)
- Process Mining (1)
- Pronunciation (1)
- Pytorch (1)
- Qualitative study (1)
- Raman microscopy (1)
- RapidMiner (1)
- Ray tracing (1)
- Reasoning (1)
- Remaining Useful Life (RUL) estimates (1)
- Requirements (1)
- Review (1)
- Robust grasping (1)
- Serious Games (1)
- Skin detection (1)
- Slippage detection (1)
- Smart Grid (1)
- Smart Home (1)
- Smart InGaAs camera-system (1)
- Social-Choice-Theorie (1)
- Spieltheorie (1)
- Studenten (1)
- Studienverlauf (1)
- Survey (1)
- Taste (1)
- Technologie (1)
- Traffic Simulations (1)
- Transformers (1)
- Travel Techniques (1)
- Tree Stumps (1)
- UAV (1)
- Ultrasonic array (1)
- Uncertainty Quantification (1)
- Underwater (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Unterstützung (1)
- User Experience (1)
- User centered design (1)
- User feedback (1)
- User-Centered Design (1)
- Videogame (1)
- Virtual Agents (1)
- Virtuelle Realität (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visuelle Wahrnehmung (1)
- Vulnerable Groups (1)
- Wearables (1)
- Wissensaustausch (1)
- action unit recognition (1)
- aerodynamics (1)
- appraisal theory (1)
- audio-tactile feedback (1)
- authoring tools (1)
- autonomous explanation generation (1)
- brightfield microscopy (1)
- co-design (1)
- component analyses (1)
- depth perception (1)
- distraction detection (1)
- driver distraction (1)
- dynamic vector fields (1)
- emotion inference (1)
- explainability (1)
- facial action units (1)
- facial expressions (1)
- facial expressions of pain (1)
- flight zone (1)
- geofence (1)
- haptics (1)
- human-robot interaction (HRI) (1)
- image fusion (1)
- information fusion (1)
- interaction architecture (1)
- interpretability (1)
- mental state analysis (1)
- multisensory (1)
- neural networks (1)
- neutral buoyancy (1)
- optic flow (1)
- pain datasets (1)
- pain detection (1)
- pain feature representation (1)
- pansharpening (1)
- path tracing (1)
- prototyping (1)
- real-time (1)
- remote sensing (1)
- self-motion perception (1)
- sensors (1)
- sensory perception (1)
- social robots (1)
- socio-interactive explanation generation (1)
- state constraints (1)
- state estimation (1)
- survey (1)
- transparency (1)
- user-centered explanation generation (1)
- vection (1)
- virtual reality (1)
Wie KI Innere Führung lernt
(2022)
That artificial intelligence (AI) has spread worldwide is a truism. The rapid and unstoppable proliferation of AI over the past decade speaks for itself, and legislators and regulators have long since followed suit in order to contain AI and its technological consequences. Design requirements relevant to Germany have been formulated by the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI) and, at the national level, by the Federal Government's Data Ethics Commission (DEK) and the German Bundestag's Enquete Commission on Artificial Intelligence (EKKI).
Vection underwater
(2022)
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
"Industrie 4.0" and related buzzwords such as "Big Data", the "Internet of Things" and "cyber-physical systems" are currently taken up widely in industry. The starting point for this is the networking of IT technologies and pervasive digitalization. Not only are the business areas and business models of companies themselves undergoing a correspondingly radical transformation; the change also extends to employees' working environments as well as to private and public spaces (Botthof, 2015; Hartmann, 2015).
Towards self-explaining social robots. Verbal explanation strategies for a needs-based architecture
(2019)
In order to establish long-term relationships with users, social companion robots and their behaviors need to be comprehensible. Purely reactive behavior such as answering questions or following commands can be readily interpreted by users. However, the robot's proactive behaviors, included in order to increase liveliness and improve the user experience, often raise a need for explanation. In this paper, we provide a concept to produce accessible “why-explanations” for the goal-directed behavior an autonomous, lively robot might produce. To this end we present an architecture that provides reasons for behaviors in terms of comprehensible needs and strategies of the robot, and we propose a model for generating different kinds of explanations.
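The mapping from a proactive behavior to a needs-based "why-explanation" sketched in this abstract can be illustrated as follows; the behavior, need, and strategy names are invented for illustration and are not taken from the paper's architecture:

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    name: str      # what the robot did
    need: str      # the internal need the behavior serves
    strategy: str  # the strategy chosen to satisfy that need

def why_explanation(b: Behavior) -> str:
    # Compose an accessible "why-explanation" from the need and
    # strategy that motivated the goal-directed behavior.
    return (f"I {b.name} because I {b.need}, "
            f"and {b.strategy} helps me with that.")

suggest_game = Behavior(
    name="suggested a game",
    need="want us to have fun together",
    strategy="playing together",
)
print(why_explanation(suggest_game))
```

The point of the sketch is only the structure: proactive behaviors carry explicit links to needs and strategies, so explanations can be generated rather than hand-written per behavior.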
This dissertation presents a probabilistic state estimation framework for integrating data-driven machine learning models and a deformable facial shape model in order to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AU), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied for modelling the dynamics of facial muscle movements. Both facial shape and appearance information are used for estimating AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework to deal with multiple challenges involved in automatic AU detection. The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively using three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produced temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performed reasonably well, even though it simultaneously estimates intensities of 22 AUs, some of which are subtle in expression or resemble each other closely. The estimated AU intensities tended to the lower range of values, and were often accompanied by a small delay in onset. This shows that the proposed method is conservative. In order to further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
Towards explaining deep learning networks to distinguish facial expressions of pain and emotions
(2018)
Deep learning networks are successfully used for object and face recognition in images and videos. However, for applying such networks in practice, for example as a pain recognition tool in hospitals, the current procedures are suitable only to a limited extent. The advantage of deep learning methods is that they can learn complex non-linear relationships between raw data and target classes without limiting themselves to a set of hand-crafted features provided by humans. The disadvantage is that, due to the complexity of these networks, it is not possible to interpret the knowledge that is stored inside the network: it is a black-box learning procedure. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI method Layer-wise Relevance Propagation (LRP) and apply it to explain how a deep learning network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
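The LRP idea described here can be illustrated with a minimal epsilon-rule implementation for a tiny dense ReLU network; the network and its weights below are toy values for illustration, not the paper's pain/emotion model:

```python
def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Redistribute output relevance to a dense layer's inputs
    (epsilon rule): R_i = sum_j a_i * w_ji / (z_j + eps) * R_j."""
    n_in = len(activations)
    # pre-activations z_j of this layer
    z = [sum(activations[i] * w[i] for i in range(n_in)) for w in weights]
    relevance_in = [0.0] * n_in
    for j, w in enumerate(weights):
        denom = z[j] + (eps if z[j] >= 0 else -eps)
        for i in range(n_in):
            relevance_in[i] += activations[i] * w[i] / denom * relevance_out[j]
    return relevance_in

def dense(W, a):
    return [sum(ai * wi for ai, wi in zip(a, w)) for w in W]

def relu(v):
    return [max(0.0, t) for t in v]

# Toy two-layer network: 3 "pixels" -> 2 hidden ReLU units -> 1 score.
x = [1.0, 0.5, 0.0]
W1 = [[1.0, -1.0, 0.5], [0.5, 1.0, -0.5]]
W2 = [[1.0, 0.5]]
h = relu(dense(W1, x))
out = dense(W2, h)
R_h = lrp_epsilon(W2, h, out)    # relevance of hidden units
R_x = lrp_epsilon(W1, x, R_h)    # relevance of each input "pixel"
# Conservation property: the input relevances sum (almost exactly)
# to the network's output score.
```

In the real setting the same backward pass runs through every layer of the trained network, producing a per-pixel relevance map that shows which facial regions drove the pain-vs-emotion decision.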