006 Special Computer Methods
Departments, institutes and facilities
- Fachbereich Informatik (56)
- Institute of Visual Computing (IVC) (29)
- Fachbereich Wirtschaftswissenschaften (11)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (10)
- Institut für Verbraucherinformatik (IVI) (8)
- Institut für Sicherheitsforschung (ISF) (7)
- Fachbereich Ingenieurwissenschaften und Kommunikation (5)
- Graduierteninstitut (3)
- Institut für KI und Autonome Systeme (A2S) (3)
- Institut für Cyber Security & Privacy (ICSP) (2)
Document Type
- Conference Object (31)
- Article (26)
- Part of a Book (5)
- Preprint (5)
- Doctoral Thesis (3)
- Report (2)
- Research Data (1)
Language
- English (73)
Keywords
- Augmented Reality (5)
- Machine Learning (4)
- Knowledge Graphs (3)
- Virtual Reality (3)
- haptics (3)
- virtual reality (3)
- 3D user interface (2)
- Bioinformatics (2)
- Natural Language Processing (2)
- Ray tracing (2)
- Robotics (2)
- Skin detection (2)
- Transformers (2)
- authoring tools (2)
- biometrics (2)
- guidance (2)
- mixed reality (2)
- prototyping (2)
- 3D navigation (1)
- 450 MHz (1)
- AI usage in sports (1)
- AR (1)
- AR design (1)
- AR development (1)
- AR/VR (1)
- Agile software development (1)
- Applications in Energy Transport (1)
- Artificial Intelligence (1)
- Auditory Cueing (1)
- Automatic Differentiation (1)
- Ball Tracking (1)
- Bayesian Deep Learning (1)
- Behaviour-Driven Development (1)
- Camera selection (1)
- Camera view analysis (1)
- Case study (1)
- Classifiers (1)
- Codes (1)
- Collaborating industrial robots (1)
- Complex Systems Modeling and Simulation (1)
- Complexity (1)
- Compliant fingers (1)
- Computational fluid dynamics (1)
- Computergrafik (1)
- Concurrent repeated failure prognosis (1)
- Conformation (1)
- Crossmedia (1)
- Crystal structure (1)
- Current research information systems (1)
- Cybersickness (1)
- Data Fusion (1)
- Data structures (1)
- Dementia (1)
- Demonstration-based training (1)
- Design (1)
- Design Recommendations (1)
- Design Theory and Practice (1)
- Diagnostic bond graph-based online fault diagnosis (1)
- Distance Perception (1)
- Drosophila (1)
- Embedded system (1)
- Entropy (1)
- Exergame (1)
- Facial Emotion Recognition (1)
- Feedback (1)
- Flow control (1)
- Fluency (1)
- Forests (1)
- Functional safety (1)
- Games and Simulations for Learning (1)
- Geometry (1)
- Graph embeddings (1)
- Graph theory (1)
- Guidelines (1)
- HCI (1)
- HDBR (1)
- Hardware (1)
- Head-mounted Display (1)
- Higher education (1)
- Human factors (1)
- Human orientation perception (1)
- Human-Centered Design (1)
- Human-Food-Interaction (1)
- Hyperspectral image (1)
- ICT (1)
- IEC 104 (1)
- IEC 61850 (1)
- Increasing fault magnitude (1)
- Inductive Logic Programming (1)
- Information Security (1)
- Instruction design (1)
- Intermittent faults (1)
- Kinect (1)
- LTE-M (1)
- Language learning (1)
- Lattice Boltzmann Method (1)
- Ligands (1)
- Locomotion (1)
- MQTT (1)
- MR (1)
- Mathematical methods (1)
- Microgravity (1)
- Mixed Reality (1)
- Model-driven engineering (1)
- Molecular structure (1)
- Motion Sickness (1)
- Multi-camera (1)
- NIR-point sensor (1)
- NLP (1)
- Navigation (1)
- Negotiation of Taste (1)
- Neural representations (1)
- Neuroscience (1)
- Non-linear systems (1)
- OCT (1)
- Object-Based Image Analysis (OBIA) (1)
- Optical Flow (1)
- Out-of-view Objects (1)
- PAD (1)
- Perception (1)
- Perceptual Upright (1)
- Pronunciation (1)
- Proximity (1)
- Psychology (1)
- Pytorch (1)
- Qualitative study (1)
- Raman microscopy (1)
- Real-Time Image Processing (1)
- Reasoning (1)
- Recommender systems (1)
- Remaining Useful Life (RUL) estimates (1)
- Requirements (1)
- Requirements Engineering (1)
- Review (1)
- Robust grasping (1)
- SMPA loop (1)
- Semantic search (1)
- Serious Games (1)
- Slippage detection (1)
- Smart Grid (1)
- Smart InGaAs camera-system (1)
- Spectroscopy (1)
- Spherical Treadmill (1)
- Survey (1)
- Taste (1)
- Three-dimensional displays (1)
- Topology (1)
- Traffic Simulations (1)
- Travel Techniques (1)
- Tree Stumps (1)
- UAV (1)
- Ultrasonic array (1)
- Uncertainty Quantification (1)
- Underwater (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Usable Security and Privacy (1)
- User Interface Design (1)
- User centered design (1)
- User experience design (1)
- User feedback (1)
- User-Centered Design (1)
- User-centered privacy engineering (1)
- VR (1)
- Videogame (1)
- View selection (1)
- Virtual Agents (1)
- Virtuelle Realität (1)
- Visual Cueing (1)
- Visual Discrimination (1)
- Visuelle Wahrnehmung (1)
- Vulnerable Groups (1)
- XR (1)
- adaptive trigger (1)
- aerodynamics (1)
- analog/digital signal processing (1)
- assistive robotics (1)
- audio-tactile feedback (1)
- augmented reality (1)
- authentication (1)
- authoring (1)
- brightfield microscopy (1)
- co-design (1)
- collision (1)
- component analyses (1)
- computer vision (1)
- controller design (1)
- depth perception (1)
- dynamic vector fields (1)
- elite sports (1)
- explainable AI (1)
- fingerprint (1)
- fitness-fatigue model (1)
- flight zone (1)
- geofence (1)
- head down bed rest (1)
- image fusion (1)
- interaction design (1)
- interactive computer graphics (1)
- interface design (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- mathematical modeling (1)
- multisensory (1)
- navigational search (1)
- near infrared (1)
- neural networks (1)
- neutral buoyancy (1)
- optic flow (1)
- optical coherence tomography (1)
- optical sensor (1)
- pansharpening (1)
- path tracing (1)
- performance modeling (1)
- performance prediction (1)
- practitioners (1)
- presentation attack detection (1)
- presentation attack detection (PAD) (1)
- psychophysics (1)
- real-time (1)
- reinforcement learning (1)
- remote sensing (1)
- robot behaviour model (1)
- robot personalisation (1)
- self-motion perception (1)
- sensor resilience (1)
- sensory perception (1)
- space flight analog (1)
- spatial orientation (1)
- spatial updating (1)
- subjective visual vertical (1)
- training performance relationship (1)
- user modelling (1)
- vection (1)
- vibration (1)
- virtual reality, XR (1)
- weight perception (1)
Most VE frameworks try to support many different input and output devices. They do not concentrate much on rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers, including sound renderers, physics renderers, and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and Mac OS X.
In this paper, we introduce an optical sensor system that is integrated into an industrial push-button. When the button is pressed, the sensor classifies the material in contact with it into different material categories on the basis of the material's so-called "spectral signature". An approach for a safety sensor system at circular table saws working on the same basis was previously introduced at SIAS 2007. This contactless sensor reliably distinguishes between skin, textiles, leather, and various other kinds of materials. A typical application for this intelligent push-button is its use at potentially dangerous machines whose operating instructions either prohibit or require the wearing of gloves while working at the machine. An example of machines at which no gloves are allowed are pillar drilling machines, because of the risk of getting caught in the drill chuck and being drawn in by the machine, which in many cases causes very serious hand injuries. Depending on the application's needs, the sensor system integrated into the push-button can be flexibly configured in software to prevent the operator from accidentally starting a machine with or without gloves, which can significantly decrease the risk of severe accidents. Two-hand controls in particular invite manipulation for easier handling. By equipping both push-buttons of a two-hand control with material classification capabilities, the user is forced to operate the controls with bare fingers; this prevents the manipulation of a two-hand control by a simple rodding device.
Noncooperative Game Theory
(2016)
The detection of human skin in images is a very desirable feature for applications such as biometric face recognition, which is increasingly used for, e.g., automated border or access control. However, distinguishing real skin from other materials based on imagery captured in the visual spectrum alone, and in spite of varying skin types and lighting conditions, can be difficult and unreliable. Therefore, spoofing attacks with facial disguises or masks are still a serious problem for state-of-the-art face recognition algorithms. This dissertation presents a novel approach for reliable skin detection based on spectral remission properties in the short-wave infrared (SWIR) spectrum and proposes a cross-modal method that enhances existing solutions for face verification to ensure the authenticity of a face even in the presence of partial disguises or masks. Furthermore, it presents a reference design and the necessary building blocks for an active multispectral camera system that implements this approach, as well as an in-depth evaluation. The system acquires four-band multispectral images within T = 50 ms. Using a machine-learning-based classifier, it achieves unprecedented skin detection accuracy, even in the presence of skin-like materials used for spoofing attacks. Paired with a commercial face recognition software, the system successfully rejected all evaluated attempts to counterfeit a foreign face.
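The principle of separating skin from skin-like materials by their SWIR remission can be sketched in a few lines. This is a toy illustration only, not the dissertation's trained four-band classifier: the band choice (comparing a shorter SWIR band against ~1450 nm, where water absorption sharply lowers skin remission), the remission values, and the threshold are all illustrative assumptions.

```python
# Toy illustration of spectral skin detection (NOT the trained classifier
# from the dissertation; bands, values, and threshold are assumptions).

def normalized_difference(r_a, r_b):
    """Normalized difference of two remission values, in [-1, 1]."""
    return (r_a - r_b) / (r_a + r_b)

def looks_like_skin(remissions, threshold=0.3):
    """remissions: dict of band name -> remission in [0, 1].
    Skin absorbs strongly near 1450 nm (water), so its remission there
    drops sharply relative to shorter SWIR wavelengths."""
    score = normalized_difference(remissions["1050nm"], remissions["1450nm"])
    return score > threshold

# A skin-like spectral signature: high remission at 1050 nm, low at 1450 nm.
skin = {"1050nm": 0.55, "1450nm": 0.10}
# A silicone mask reflects similarly in both bands.
mask = {"1050nm": 0.50, "1450nm": 0.45}

print(looks_like_skin(skin))  # → True
print(looks_like_skin(mask))  # → False
```

A real system would feed all four band remissions into a trained classifier rather than thresholding a single band ratio.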
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Females are influenced more than males by visual cues during many spatial orientation tasks, but females rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity, and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART). OCHART uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity, and the internal representation of the body were then calculated. The experiments were repeated while upright. The relative contribution of vision to the PU was smaller in females than in males (t=-18.48, p≤0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1 g), in this and other studies (e.g., Barnett-Cowan et al. 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight females allocate to vision under simulated low-gravity conditions, compared to upright under normal gravity, may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case and at which point the perceptual change happens requires further research.
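Weighting analyses of this kind are commonly based on a weighted vector-sum model: the perceptual upright is the direction of the sum of the vision, gravity, and body cue vectors, each scaled by its weight. The sketch below assumes that model; the weights and angles are illustrative values loosely matching the percentages reported above, not the study's fitted parameters.

```python
import math

def perceptual_upright(cues):
    """cues: list of (weight, angle_deg) pairs, angles measured from the
    body axis. Returns the direction of the weighted vector sum in degrees."""
    x = sum(w * math.cos(math.radians(a)) for w, a in cues)
    y = sum(w * math.sin(math.radians(a)) for w, a in cues)
    return math.degrees(math.atan2(y, x))

# Body and gravity cues along the body axis, visual scene tilted 112 deg
# (all weights illustrative, not fitted parameters):
cues = [(0.63, 0.0),    # body
        (0.28, 0.0),    # gravity
        (0.09, 112.0)]  # vision
print(round(perceptual_upright(cues), 1))  # → 5.4, PU pulled toward the scene
```

With a larger vision weight, the same tilted scene pulls the perceptual upright further away from the body axis, which is how differing cue weightings become measurable.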
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency with respect to ball motion of 30 ± 8 ms.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is built from off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions, such as ball surface texture, lighting conditions, or ball size, as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems enable a larger visual field to be addressed by covering a wider field of view, due either to their size or to taking the form of head-mounted devices. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses, up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality but also to reduce workload, a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. Using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capture of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the introduced methods are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). Through model calibration, the parameters are adapted to the subject's individual physical response to training load. Although simulating the recorded training data usually gives useful results once the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model so that the effects of former training history can be analyzed separately (we call these terms preload), it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; its prediction error is around 54% of the error of the FF-Model without preloads.
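The base model described above, a basic performance level plus antagonistic fitness and fatigue terms, each an exponentially decaying sum over past training loads, can be sketched as follows. The parameter values are illustrative, not fitted to any athlete, and the paper's preload extension is not included here.

```python
import math

def ff_model(loads, t, p0, k1, tau1, k2, tau2):
    """Classic Fitness-Fatigue Model (illustrative parameters).
    loads: training load w(s) on days s = 0..len(loads)-1.
    Returns modeled performance on day t; only loads before day t count."""
    fitness = sum(w * math.exp(-(t - s) / tau1)
                  for s, w in enumerate(loads) if s < t)
    fatigue = sum(w * math.exp(-(t - s) / tau2)
                  for s, w in enumerate(loads) if s < t)
    return p0 + k1 * fitness - k2 * fatigue

loads = [100] * 10  # ten days of constant training load
# Fatigue decays faster (tau2 < tau1): shortly after training performance
# dips below the basic level p0, while weeks later fitness gains remain.
p_early = ff_model(loads, t=10, p0=500, k1=1.0, tau1=45, k2=2.0, tau2=15)
p_late = ff_model(loads, t=60, p0=500, k1=1.0, tau1=45, k2=2.0, tau2=15)
```

The preload idea then amounts to adding extra decaying terms that summarize training done before day 0, so that p0 no longer has to absorb that history during fitting.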
In this paper, we provide a participatory design study of a mobile health platform for older adults that provides an integrative perspective on health data collected from different devices and apps. We illustrate the diversity and complexity of older adults' perspectives in the context of health and technology use, the challenges these pose for the design of mobile health platforms that support active and healthy ageing (AHA), and our approach to addressing these challenges through a participatory design (PD) process. Interviews were conducted with older adults aged 65+ in a two-month study with the goal of understanding their perspectives on health and on technologies for AHA support. We identified challenges and derived design ideas for a mobile health platform called "My-AHA". For researchers in this field, the structured documentation of our procedures and results, as well as the implications we derive, provides valuable insights for the design of mobile health platforms for older adults.
This work addresses the issue of finding an optimal flight zone for a side-by-side tracking and following Unmanned Aerial Vehicle (UAV), adhering to space-restricting factors imposed by a dynamic Vector Field Extraction (VFE) algorithm. The VFE algorithm demands that the UAV's field of view be roughly perpendicular to the tracked vehicle, which gives rise to the space-restricting factors: distance, angle, and altitude. The objective of the UAV is to perform side-by-side tracking and following of a lightweight ground vehicle while acquiring high-quality video of tufts attached to the side of the tracked vehicle. The recorded video is supplied to the VFE algorithm, which produces the positions and deformations of the tufts over time as they interact with the surrounding air, resulting in an airflow model of the tracked vehicle. The present limitations of wind tunnel tests and computational fluid dynamics simulations suggest the use of a UAV for real-world evaluation of the aerodynamic properties of the vehicle's exterior. The novelty of the proposed approach lies in defining the specific flight-zone-restricting factors while adhering to the VFE algorithm; as a result, we were able to formalize a locally static and globally dynamic geofence attached to the tracked vehicle and enclosing the UAV.
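The three space-restricting factors can be checked with a simple predicate defined relative to the tracked vehicle, a sketch of the "locally static, globally dynamic" geofence idea: the bounds are fixed in the vehicle's frame but move with it in the world frame. The concrete distance, angle, and altitude bounds below are illustrative assumptions, not values derived in the paper.

```python
import math

def inside_geofence(uav, vehicle, heading_deg,
                    dist_range=(3.0, 8.0),
                    angle_range_deg=(80.0, 100.0),
                    alt_range=(0.5, 2.5)):
    """uav, vehicle: (x, y, z) world positions; heading_deg: vehicle heading.
    The angle between the vehicle's heading and the bearing to the UAV is
    kept near 90 deg so the camera stays roughly perpendicular to the side.
    All bounds are illustrative assumptions."""
    dx, dy = uav[0] - vehicle[0], uav[1] - vehicle[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    rel_angle = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    alt = uav[2] - vehicle[2]
    return (dist_range[0] <= dist <= dist_range[1]
            and angle_range_deg[0] <= rel_angle <= angle_range_deg[1]
            and alt_range[0] <= alt <= alt_range[1])

# Vehicle heading east (+x); UAV 5 m off its left side, 1.5 m higher:
print(inside_geofence((0.0, 5.0, 1.5), (0.0, 0.0, 0.0), heading_deg=0.0))  # → True
```

Because the bounds are expressed in the vehicle's frame, re-evaluating this predicate as the vehicle moves yields the globally dynamic geofence without any recomputation of the bounds themselves.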
Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper, we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an Ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics behave strangely for this task, due to the multiple classes that can be considered correct, which motivates future work. We believe our work will motivate other researchers to move away from classical and towards Bayesian Neural Networks.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of the cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
Foreword to the Special Section on the Symposium on Virtual and Augmented Reality 2019 (SVR 2019)
(2020)
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models (KGEMs). However, representations based on a single modality are inherently limited. To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs. This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler (INDRA) consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against two baseline models trained on either one of the modalities (i.e., text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.083. Additionally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications. Finally, the source code and pre-trained STonKGs models are available at https://github.com/stonkgs/stonkgs and https://huggingface.co/stonkgs/stonkgs-150k.