006 Special computer methods
Departments, institutes and facilities
- Fachbereich Informatik (25)
- Institute of Visual Computing (IVC) (14)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (4)
- Institut für Verbraucherinformatik (IVI) (3)
- Fachbereich Ingenieurwissenschaften und Kommunikation (2)
- Fachbereich Wirtschaftswissenschaften (2)
- Institut für Sicherheitsforschung (ISF) (2)
- Institut für KI und Autonome Systeme (A2S) (1)
- Institut für funktionale Gen-Analytik (IFGA) (1)
Document Type
- Article (38)
Keywords
- 3D user interface (2)
- Automatic pain detection (2)
- Machine learning (2)
- biometrics (2)
- deep learning (2)
- haptics (2)
- virtual reality (2)
- 3D navigation (1)
- AI usage in sports (1)
- AR (1)
- Action Unit detection (1)
- Altenhilfe (1)
- Artificial Intelligence (1)
- Auditory Cueing (1)
- Augmented Reality (1)
- Blasendiagramm (1)
- Business Process Intelligence (1)
- Camera selection (1)
- Camera view analysis (1)
- Classifiers (1)
- Codes (1)
- Complexity (1)
- Conformation (1)
- Crystal structure (1)
- Current research information systems (1)
- Curriculum (1)
- Cybersickness (1)
- Data structures (1)
- Datenanalyse (1)
- Dementia (1)
- Demenz (1)
- Demonstration-based training (1)
- Disco (1)
- Educational Data Mining (1)
- Educational Process Mining (1)
- Emotion (1)
- Entropy (1)
- Exergames (1)
- Explainable artificial intelligence (1)
- Facial expression (1)
- Fall prevention (1)
- Fallbeschreibung (1)
- Feedback (1)
- Fluency (1)
- Fuzzy Mining (1)
- Gaussian state estimation (1)
- Geometry (1)
- Geschäftsprozess (1)
- Graph embeddings (1)
- Graph theory (1)
- HDBR (1)
- Hardware (1)
- Head-mounted Display (1)
- Human orientation perception (1)
- ICT Design (1)
- Inductive Visual Mining (1)
- Instruction design (1)
- Knowledge Graphs (1)
- Language learning (1)
- Langzeitbehandlung (1)
- Ligands (1)
- Locomotion (1)
- Machine Learning (1)
- Mathematical methods (1)
- Molecular structure (1)
- Motion Sickness (1)
- Multi-camera (1)
- NLP (1)
- Neuroscience (1)
- OCT (1)
- Older adults (1)
- Out-of-view Objects (1)
- PAD (1)
- Pain diagnostics (1)
- Perception (1)
- Perceptual Upright (1)
- Pflegepersonal (1)
- ProM (1)
- Process Mining (1)
- Pronunciation (1)
- Proximity (1)
- Psychology (1)
- RapidMiner (1)
- Ray tracing (1)
- Recommender systems (1)
- SMPA loop (1)
- Semantic search (1)
- Skin detection (1)
- Spectroscopy (1)
- Studenten (1)
- Studienverlauf (1)
- Technologie (1)
- Three-dimensional displays (1)
- Topology (1)
- Travel Techniques (1)
- UAV (1)
- Unterstützung (1)
- VR (1)
- View selection (1)
- Virtual Reality (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Wearables (1)
- adaptive trigger (1)
- aerodynamics (1)
- analog/digital signal processing (1)
- appraisal theory (1)
- assistive robotics (1)
- authentication (1)
- autonomous explanation generation (1)
- collision (1)
- computer vision (1)
- controller design (1)
- driver distraction (1)
- dynamic vector fields (1)
- elite sports (1)
- emotion inference (1)
- emotion recognition (1)
- explainability (1)
- explainable AI (1)
- facial expression analysis (1)
- facial expressions (1)
- facial expressions of pain (1)
- fingerprint (1)
- fitness-fatigue model (1)
- flight zone (1)
- geofence (1)
- head down bed rest (1)
- human-robot interaction (HRI) (1)
- interaction architecture (1)
- interactive computer graphics (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- mathematical modeling (1)
- mixed reality (1)
- navigational search (1)
- near infrared (1)
- optical coherence tomography (1)
- optical sensor (1)
- pain datasets (1)
- pain feature representation (1)
- pain recognition (1)
- performance modeling (1)
- performance prediction (1)
- presentation attack detection (1)
- presentation attack detection (PAD) (1)
- psychophysics (1)
- reinforcement learning (1)
- robot behaviour model (1)
- robot personalisation (1)
- sensor resilience (1)
- sensors (1)
- social robots (1)
- socio-interactive explanation generation (1)
- space flight analog (1)
- spatial orientation (1)
- spatial updating (1)
- subjective visual vertical (1)
- survey (1)
- training performance relationship (1)
- transparency (1)
- user modelling (1)
- user-centered explanation generation (1)
- vibration (1)
- weight perception (1)
BACKGROUND
Given the unreliability of self-report in patients with dementia, pain assessment should also rely on the observation of pain behaviors, such as facial expressions. Ideal observers would need to be well trained and to watch the patient continuously in order to pick up any pain-indicative behavior, requirements that lie beyond the realistic possibilities of pain care. Therefore, the need for video-based pain detection systems has been voiced repeatedly. Such systems would allow for constant monitoring of pain behaviors and thereby for a timely adjustment of pain management in these fragile patients, who are often undertreated for pain.
METHODS
In this road map paper we describe an interdisciplinary approach to developing such a video-based pain detection system. The work starts with the selection of appropriate video material of people in pain and the development of technical methods to capture their faces. Individual facial motions are then extracted automatically according to an international coding system, and computer algorithms are trained to detect the pain-indicative combinations and timing of those motions (see the illustrative sketch after this abstract).
RESULTS/CONCLUSION
We hope to encourage colleagues to join forces and to inform end-users about an imminent solution to a pressing pain-care problem. In the near future, such systems can be expected to monitor immobile patients in intensive and postoperative care settings.
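As a purely illustrative sketch of the classification step outlined under METHODS (not the authors' actual system), the following assumes that per-frame Action Unit intensities have already been extracted for short video clips and trains a simple classifier on temporal summary statistics; all names and data are hypothetical stand-ins.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def clip_features(au_frames):
    # au_frames: (n_frames, n_aus) array of Action Unit intensities for one clip.
    # Summary statistics capture both the strength and the temporal dynamics of each AU.
    return np.concatenate([
        au_frames.mean(axis=0),                           # average activation per AU
        au_frames.max(axis=0),                            # peak activation per AU
        np.abs(np.diff(au_frames, axis=0)).mean(axis=0),  # mean frame-to-frame change
    ])

# Hypothetical stand-in data: 200 clips, 50 frames each, 17 AUs, binary pain labels.
rng = np.random.default_rng(0)
clips = rng.random((200, 50, 17))
labels = rng.integers(0, 2, size=200)

X = np.stack([clip_features(c) for c in clips])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())

In practice, such statistics would be computed over AU time series from real, annotated clips, and the choice of features and classifier would follow the pain-detection literature rather than this toy setup.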
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments pose significant unsolved technical challenges due to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research directions targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Females are influenced more than males by visual cues during many spatial orientation tasks, but rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART), which uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was smaller in females than in males (t = -18.48, p ≤ 0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while the body weightings remained essentially constant (f: 63.0%, m: 63.2%). When upright (1 g), in this and other studies (e.g., Barnett-Cowan et al. 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight that females allocate to vision in simulated low-gravity conditions compared to upright under normal gravity may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case, and at which point the perceptual change happens, requires further research.
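For reference, in the OCHART framework the perceptual upright is commonly modeled as a weighted vector sum of the three cues; a minimal sketch with notation assumed here (not taken from the abstract):

PU \propto w_v \hat{v} + w_g \hat{g} + w_b \hat{b}

where \hat{v}, \hat{g} and \hat{b} are unit vectors along the directions indicated by the visual scene, gravity and the long axis of the body, and the relative contribution of each cue is its weight divided by the sum of the weights; for example, the reported body weighting of about 63% would correspond to w_b / (w_v + w_g + w_b).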
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science for studying the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). Through model calibration, the parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data in most cases gives useful results once the model is calibrated and all parameters are adjusted, the method has two major difficulties. First, the fitted value of the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model so that the effects of former training history (terms we call preload) can be analyzed separately, it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting the other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; its prediction error is around 54% of the error of the FF-Model without preloads.
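For orientation, a common Banister-type formulation of such a performance model reads (notation assumed here; the paper's exact preload formulation may differ):

p(t) = p_0 + k_1 \sum_{t_i < t} w(t_i)\, e^{-(t - t_i)/\tau_1} - k_2 \sum_{t_i < t} w(t_i)\, e^{-(t - t_i)/\tau_2}

where w(t_i) is the training load on day t_i, and (k_1, \tau_1) and (k_2, \tau_2) govern the fitness and fatigue terms, respectively. A preload-extended form in the spirit described above would carry the decayed effects of training performed before the recorded period as separate terms, e.g.

p(t) = p_0 + F_pre(t) - G_pre(t) + k_1 \sum_{t_i < t} w(t_i)\, e^{-(t - t_i)/\tau_1} - k_2 \sum_{t_i < t} w(t_i)\, e^{-(t - t_i)/\tau_2}

so that p_0 can stay at a realistic basic level while the hypothetical preload terms F_pre and G_pre absorb the influence of the former training history.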