006 Special Computer Methods
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which determines robustness against fake or altered biometric features. Artifacts such as photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contactless detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes a redesigned signal processing chain and an integrated distance measurement feature to improve both reliability and resilience. We detail the device's modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
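The skin/artifact distinction described above can be illustrated with a minimal two-wavelength reflectance sketch: human skin and most artifact materials differ in how their reflectance changes between two NIR bands, so a normalized difference score can be thresholded. The wavelength pair, reflectance values and threshold below are illustrative assumptions, not the actual parameters of the ISF device.

```python
# Illustrative sketch only: a two-band NIR skin/artifact discriminator.
# The threshold and reflectance values are hypothetical, not the device's.

def ndsi(r_lo: float, r_hi: float) -> float:
    """Normalized difference of the reflectances measured at a lower
    and a higher NIR wavelength."""
    return (r_lo - r_hi) / (r_lo + r_hi)

def classify_sample(r_lo: float, r_hi: float, threshold: float = 0.1) -> str:
    """Label a measurement as 'skin' or 'artifact' by thresholding the
    normalized difference score (assumed decision rule)."""
    return "skin" if ndsi(r_lo, r_hi) > threshold else "artifact"
```

A real device would fuse such a spectral score with further cues — the abstract mentions an integrated distance measurement — before accepting a presentation as genuine.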
Contextual information is widely considered for NLP and knowledge discovery in the life sciences, since it strongly influences the exact meaning of natural language. The scientific challenge is not only to extract such context data, but also to store it for further query and discovery approaches. Classical approaches use RDF triple stores, which have serious limitations. Here, we propose a multi-step knowledge graph approach using labeled property graphs based on polyglot persistence systems to utilize context data for context mining, graph queries, knowledge discovery and extraction. We introduce the graph-theoretic foundation for a general context concept within semantic networks and show a proof of concept based on biomedical literature and text mining. Our test system contains a knowledge graph derived from the entirety of PubMed and SCAIView data, enriched with text mining data and domain-specific language data using the Biological Expression Language. In this setting, context is a more general concept than annotation. This dense graph has more than 71M nodes and 850M relationships. We discuss the impact of this novel approach with 27 real-world use cases represented by graph queries. Storing and querying a giant knowledge graph as a labeled property graph is still a technological challenge. We demonstrate how our data model is able to support the understanding and interpretation of biomedical data, and present several real-world use cases that utilize our massive knowledge graph derived from PubMed data and enriched with additional contextual data. Finally, we show a working example in the context of biologically relevant information using SCAIView.
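The difference between a plain annotation and context attached to a relationship can be sketched with a toy in-memory labeled property graph: edges carry arbitrary key/value properties, and queries can filter on a context property. The class, node labels and the `context` key below are illustrative assumptions, not the paper's actual data model or query language.

```python
# Toy labeled property graph with context-annotated relationships.
# Names (PropertyGraph, MENTIONS, context=...) are illustrative only.

class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node id -> (label, properties)
        self.edges = []   # (source id, relationship type, target id, properties)

    def add_node(self, node_id, label, **props):
        self.nodes[node_id] = (label, props)

    def add_edge(self, src, rel_type, dst, **props):
        self.edges.append((src, rel_type, dst, props))

    def neighbors(self, node_id, rel_type=None, context=None):
        """Targets of outgoing edges, optionally filtered by relationship
        type and by the value of the edge's 'context' property."""
        result = []
        for src, t, dst, props in self.edges:
            if src != node_id:
                continue
            if rel_type is not None and t != rel_type:
                continue
            if context is not None and props.get("context") != context:
                continue
            result.append(dst)
        return result

# A document mentioning an entity, with the mention scoped to a context:
g = PropertyGraph()
g.add_node("d1", "Document", title="example abstract")
g.add_node("e1", "Entity", name="IL-6")
g.add_edge("d1", "MENTIONS", "e1", context="disease:sepsis")
```

In a production labeled property graph store the same filter would be a declarative graph query over edge properties rather than a Python loop; the point is that context lives on the relationship itself, not in a separate annotation layer.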
This work addresses the problem of finding an optimal flight zone for a side-by-side tracking and following unmanned aerial vehicle (UAV), subject to space-restricting factors imposed by a dynamic Vector Field Extraction (VFE) algorithm. The VFE algorithm demands that the UAV's field of view be roughly perpendicular to the tracked vehicle, which gives rise to the space-restricting factors: distance, angle and altitude. The objective of the UAV is to track and follow a lightweight ground vehicle side by side while acquiring high-quality video of tufts attached to the side of the tracked vehicle. The recorded video is supplied to the VFE algorithm, which produces the positions and deformations of the tufts over time as they interact with the surrounding air, resulting in an airflow model of the tracked vehicle. The present limitations of wind-tunnel tests and computational fluid dynamics simulation motivate the use of a UAV for real-world evaluation of the aerodynamic properties of the vehicle's exterior. The novelty of the proposed approach lies in defining the specific flight-zone restricting factors while adhering to the VFE algorithm; as a result, we were able to formalize a locally static, globally dynamic geofence that is attached to the tracked vehicle and encloses the UAV.
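The three restricting factors — distance, angle and altitude — can be sketched as a membership test for a geofence expressed in the tracked vehicle's own frame (locally static, yet globally dynamic because the vehicle moves). All numeric bounds and the frame convention below are illustrative assumptions, not the paper's actual constraints.

```python
# Illustrative geofence membership test in the tracked vehicle's frame.
# x: along-track offset, y: lateral offset, z: altitude above the vehicle.
# All bounds are hypothetical placeholders, not the paper's values.
import math

def in_flight_zone(x, y, z,
                   d_min=3.0, d_max=8.0,      # horizontal distance band [m]
                   max_angle_deg=20.0,        # allowed deviation from a
                                              # perpendicular (side-on) view
                   z_min=0.5, z_max=3.0):     # altitude band [m]
    d = math.hypot(x, y)
    if not (d_min <= d <= d_max):
        return False
    # A purely lateral offset (x == 0) is a perfectly side-on view;
    # along-track offset tilts the camera away from perpendicular.
    angle = math.degrees(math.atan2(abs(x), abs(y)))
    if angle > max_angle_deg:
        return False
    return z_min <= z <= z_max
```

Because the test is evaluated in the vehicle frame, the zone rides along with the vehicle: the same static inequalities describe a region that translates and rotates through the world as the vehicle drives.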
Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have been shown to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis promise to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. We compared the performance of FaceReader7 on videos of 40 participants (20 younger, mean age 25.7 years, and 20 older, mean age 52.1 years) undergoing experimentally induced heat pain against manually coded AUs as the gold-standard labeling. Percentages of correctly and falsely classified AUs were calculated, and sensitivity/recall, precision, and overall agreement (F1) were computed as indicators of congruency. Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. The congruency was better for younger than for older faces, and better for pain-indicative AUs than for other AUs. Conclusion. At the moment, automatic analyses of genuine facial expressions of pain qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
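The congruency indicators named above follow the standard definitions: recall and precision from true/false positives against the manual gold standard, and F1 as their harmonic mean. A minimal sketch, assuming binary per-frame codings of a single AU (the function name and input format are illustrative, not the study's tooling):

```python
# Agreement between manual (gold standard) and automatic binary AU codings.
# Inputs are equal-length sequences of 0/1 per-frame labels (assumed format).

def au_agreement(manual, automatic):
    """Return (recall, precision, f1) for one AU, treating the manual
    coding as ground truth."""
    tp = sum(1 for m, a in zip(manual, automatic) if m == 1 and a == 1)
    fp = sum(1 for m, a in zip(manual, automatic) if m == 0 and a == 1)
    fn = sum(1 for m, a in zip(manual, automatic) if m == 1 and a == 0)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```

In the study these metrics were computed per AU, which is what makes the reported contrast between pain-indicative AUs and other AUs possible.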
This paper explores the role of artificial intelligence (AI) in elite sports. We approach the topic from two perspectives. Firstly, we provide a literature-based overview of AI success stories in areas other than sports. We identified multiple approaches in the areas of Machine Perception, Machine Learning and Modeling, Planning and Optimization, as well as Interaction and Intervention, that hold potential for improving training and competition. Secondly, we survey the present status of AI use in elite sports. To this end, in addition to a further literature review, we interviewed leading sports scientists who are closely connected to the main national service institute for elite sports in their respective countries. The analysis of this literature review and the interviews shows that most activity is carried out in the methodical categories of signal and image processing. However, projects in the field of modeling & planning have become increasingly popular in recent years. Based on these two perspectives, we extract deficits, issues and opportunities and summarize them in six key challenges faced by the sports analytics community. These challenges include data collection, controllability of an AI by the practitioners, and explainability of AI results.