006 Special computer methods
Refine
H-BRS Bibliography
- yes (25)
Departments, institutes and facilities
- Fachbereich Informatik (20)
- Institute of Visual Computing (IVC) (12)
- Fachbereich Wirtschaftswissenschaften (4)
- Institut für Verbraucherinformatik (IVI) (3)
- Fachbereich Ingenieurwissenschaften und Kommunikation (2)
- Institut für Sicherheitsforschung (ISF) (2)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (2)
- Institut für Cyber Security & Privacy (ICSP) (1)
- Institut für KI und Autonome Systeme (A2S) (1)
- Institut für funktionale Gen-Analytik (IFGA) (1)
Document Type
- Article (18)
- Conference Object (5)
- Part of a Book (1)
- Preprint (1)
Language
- English (25)
Has Fulltext
- yes (25)
Keywords
- Augmented Reality (2)
- Knowledge Graphs (2)
- biometrics (2)
- haptics (2)
- mixed reality (2)
- virtual reality (2)
- 3D navigation (1)
- 3D user interface (1)
- AI usage in sports (1)
- AR (1)
- AR design (1)
- AR development (1)
- AR/VR (1)
- Artificial Intelligence (1)
- Ball Tracking (1)
- Bioinformatics (1)
- Camera selection (1)
- Camera view analysis (1)
- Codes (1)
- Complexity (1)
- Current research information systems (1)
- Data structures (1)
- Demonstration-based training (1)
- Design Recommendations (1)
- Design Theory and Practice (1)
- Drosophila (1)
- Entropy (1)
- Feedback (1)
- Geometry (1)
- Graph embeddings (1)
- Graph theory (1)
- Guidelines (1)
- HDBR (1)
- Hardware (1)
- Human orientation perception (1)
- Human-Centered Design (1)
- Instruction design (1)
- MR (1)
- Machine Learning (1)
- Mixed Reality (1)
- Multi-camera (1)
- NLP (1)
- Natural Language Processing (1)
- Navigation (1)
- Neuroscience (1)
- OCT (1)
- Optical Flow (1)
- PAD (1)
- Perception (1)
- Perceptual Upright (1)
- Proximity (1)
- Psychology (1)
- Ray tracing (1)
- Real-Time Image Processing (1)
- Recommender systems (1)
- Requirements Engineering (1)
- SMPA loop (1)
- Semantic search (1)
- Skin detection (1)
- Spectroscopy (1)
- Spherical Treadmill (1)
- Three-dimensional displays (1)
- Topology (1)
- Transformers (1)
- Usable Security and Privacy (1)
- User Interface Design (1)
- User experience design (1)
- User-centered privacy engineering (1)
- VR (1)
- View selection (1)
- Virtual Reality (1)
- XR (1)
- adaptive trigger (1)
- analog/digital signal processing (1)
- assistive robotics (1)
- augmented reality (1)
- authentication (1)
- authoring (1)
- authoring tools (1)
- collision (1)
- computer vision (1)
- controller design (1)
- elite sports (1)
- explainable AI (1)
- fingerprint (1)
- fitness-fatigue model (1)
- head down bed rest (1)
- interaction design (1)
- interactive computer graphics (1)
- interface design (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- mathematical modeling (1)
- navigational search (1)
- near infrared (1)
- optical coherence tomography (1)
- optical sensor (1)
- performance modeling (1)
- performance prediction (1)
- practitioners (1)
- presentation attack detection (1)
- presentation attack detection (PAD) (1)
- prototyping (1)
- psychophysics (1)
- reinforcement learning (1)
- robot behaviour model (1)
- robot personalisation (1)
- sensor resilience (1)
- space flight analog (1)
- spatial orientation (1)
- spatial updating (1)
- subjective visual vertical (1)
- training performance relationship (1)
- user modelling (1)
- vibration (1)
- virtual reality, XR (1)
- weight perception (1)
We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations on GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. In particular, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
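To make the streaming strategy above concrete, the following is a minimal, hypothetical Python sketch of double-buffered time-step streaming: a background thread prefetches the next scalar field while the current one is being rendered, so animation stays smooth even when individual time steps are large. The loader and renderer here are stand-ins, not the paper's GPU implementation.

```python
# Hedged sketch: double-buffered streaming of time-series scalar fields.
import threading
import queue
import numpy as np

NUM_TIME_STEPS = 423        # time-step count quoted in the abstract
FIELD_SHAPE = (64, 64, 64)  # toy size; the real fields are ~2.5 GB each

def load_time_step(i: int) -> np.ndarray:
    # Stand-in for disk I/O; a real loader would read the AMR cell data
    # for step i (e.g., via np.memmap) into a staging buffer.
    rng = np.random.default_rng(i)
    return rng.random(FIELD_SHAPE, dtype=np.float32)

def prefetcher(out: queue.Queue, n: int) -> None:
    for i in range(n):
        out.put((i, load_time_step(i)))  # blocks while the buffer is full
    out.put(None)                        # sentinel: no more steps

def render(field: np.ndarray) -> None:
    # Stand-in for the GPU upload + render pass.
    _ = float(field.mean())

buffer: queue.Queue = queue.Queue(maxsize=2)  # two slots = double buffering
threading.Thread(target=prefetcher, args=(buffer, NUM_TIME_STEPS),
                 daemon=True).start()

while (item := buffer.get()) is not None:
    step, field = item
    render(field)  # overlaps with the prefetch of step + 1
```

The two-slot queue is the whole trick: rendering step i and loading step i + 1 proceed concurrently, which is what turns a stuttering baseline into smooth animation.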
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
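The principle is easy to demonstrate without RT hardware: a point containment query against a watertight triangle mesh reduces to a ray cast plus an intersection parity test, and ray/triangle intersection is exactly what the hardware accelerates. Below is a hedged CPU sketch in Python/numpy (a stand-in for the article's GPU approach), using Möller–Trumbore intersection and a tetrahedron as the test mesh.

```python
# Point-in-mesh via ray-crossing parity: a point is inside a closed mesh
# iff a ray from it crosses the surface an odd number of times.
import numpy as np

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Möller–Trumbore ray/triangle intersection; counts hits with t > 0.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return False              # ray parallel to the triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return e2.dot(q) * inv > eps  # hit strictly in front of the origin

# Watertight test mesh: a unit tetrahedron with four triangular faces.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
FACES = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def inside(point, direction=np.array([1.0, 0.37, 0.21])):
    hits = sum(ray_hits_triangle(point, direction, V[a], V[b], V[c])
               for a, b, c in FACES)
    return hits % 2 == 1          # odd crossing count -> inside

print(inside(np.array([0.1, 0.1, 0.1])))  # True: inside
print(inside(np.array([1.0, 1.0, 1.0])))  # False: outside
```

On a GPU, the per-triangle loop disappears: the query ray is handed to the BVH traversal and intersection units, and only the hit count comes back.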
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open-source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency of 30 ± 8 ms with respect to ball motion.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is based on off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions – such as ball surface texture, lighting conditions, or ball size – as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking, suitable for virtual reality animal behavior experiments across different scales, was developed and characterized.
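As a rough illustration of the camera-based approach (not the authors' actual pipeline), the sketch below estimates dense optical flow on the ball surface with OpenCV and converts mean pixel displacement into angular velocity. The calibration constants are assumptions.

```python
# Hedged toy version of camera-based treadmill ball tracking.
import cv2
import numpy as np

PX_PER_MM = 8.0        # assumed camera calibration: pixels per mm on the ball
BALL_RADIUS_MM = 50.0  # assumed ball radius
FPS = 500              # tracking rate quoted in the abstract

def angular_velocity(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()  # mean shift in pixels
    # arc length / radius gives the rotation angle; scale by frame rate
    omega_x = (dy / PX_PER_MM) / BALL_RADIUS_MM * FPS  # rad/s
    omega_y = (dx / PX_PER_MM) / BALL_RADIUS_MM * FPS  # rad/s
    return np.array([omega_x, omega_y])

# Synthetic demo: a shifted random texture stands in for two camera frames.
rng = np.random.default_rng(0)
frame0 = (rng.random((128, 128)) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=2, axis=1)  # 2 px rightward motion
print(angular_velocity(frame0, frame1))    # omega_y ~ 2.5 rad/s
```

In a real setup the flow would be evaluated only inside a calibrated region of the ball image, and the in-plane components would be combined via the camera geometry into the three rotation axes.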
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
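As a generic illustration of such a behaviour model (the paper's actual state/action design, learned user model, and reward functions are richer than this), here is a hedged tabular Q-learning sketch in which the reward mixes engagement with activity performance. Policy transfer between user groups would amount to initializing Q from a table learned on another group.

```python
# Hedged Q-learning sketch for a robot behaviour model.
import numpy as np

STATES = 9   # e.g., 3 engagement levels x 3 performance levels (assumed)
ACTIONS = 4  # e.g., wait, encourage, hint, demonstrate (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((STATES, ACTIONS))  # transfer: warm-start from another group

def simulated_user(state: int, action: int) -> tuple[int, float]:
    # Stand-in for a learned user model: returns the next state and a
    # reward that weights engagement (0.6) against performance (0.4).
    next_state = int(rng.integers(STATES))
    engagement, performance = rng.random(), rng.random()
    return next_state, 0.6 * engagement + 0.4 * performance

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    action = int(rng.integers(ACTIONS)) if rng.random() < EPS \
        else int(Q[state].argmax())
    next_state, reward = simulated_user(state, action)
    # one-step temporal-difference update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("greedy action per state:", Q.argmax(axis=1))
```

Learning from expert feedback, as evaluated in the paper, would replace or augment the simulated reward with a therapist's guidance signal.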
It is challenging to provide users with a haptic weight sensation of virtual objects in VR, since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. As a result, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants in sensing changes in trigger resistance and thus in discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants, while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research, and the pros and cons of this novel technology.
Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device’s modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.
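As a purely illustrative sketch of the underlying skin-detection principle (the actual NIR-PAD signal processing chain is far more elaborate; the wavelengths and threshold below are assumptions, not the device's parameters), a two-band remission ratio test could look like this:

```python
# Hypothetical two-wavelength NIR skin check: skin's characteristic
# remission spectrum separates it from many artifact materials.
def looks_like_skin(remission_band1: float, remission_band2: float,
                    threshold: float = 1.3) -> bool:
    """Normalized remission measured at two NIR bands (values > 0)."""
    if remission_band2 <= 0.0:
        return False
    return (remission_band1 / remission_band2) > threshold

# A spoof material may reflect both bands similarly, while skin shows a
# stronger contrast between them (illustrative numbers only).
print(looks_like_skin(0.80, 0.50))  # True  -> plausibly skin
print(looks_like_skin(0.60, 0.58))  # False -> flagged as potential artifact
```

In the real device, spectral features like these would presumably be combined with the integrated distance measurement and the rest of the signal processing chain before a presentation attack decision is made.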
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data in most cases shows useful results when the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value as basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that effects of former training history can be analyzed separately – we call those terms preload – it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; the prediction error is around 54% of that error.
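For context, a common discrete form of the FF-Model (the Banister model) is shown below, followed by a hedged sketch of the preload idea described in the abstract; the preload symbols g_0 and h_0 are illustrative labels, not necessarily the paper's notation.

```latex
% Classic fitness-fatigue model: performance at day t is a basic level
% plus a fitness term minus a fatigue term, each an exponentially
% weighted sum of past training loads w(s).
\[
  p(t) = p_0
       + k_1 \sum_{s=1}^{t-1} e^{-(t-s)/\tau_1}\, w(s)
       - k_2 \sum_{s=1}^{t-1} e^{-(t-s)/\tau_2}\, w(s)
\]
% Hedged preload sketch: unknown prior training history enters as two
% decaying initial-state terms, so p_0 can stay at a realistic basic level.
\[
  p(t) = p_0
       + \underbrace{g_0\, e^{-t/\tau_1} - h_0\, e^{-t/\tau_2}}_{\text{preload}}
       + k_1 \sum_{s=1}^{t-1} e^{-(t-s)/\tau_1}\, w(s)
       - k_2 \sum_{s=1}^{t-1} e^{-(t-s)/\tau_2}\, w(s)
\]
```

With the preload terms absorbing past history, calibration no longer has to inflate p_0 to account for training done before the recorded period.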
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas where one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often linked to perception mechanisms associated with our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to their cost and effort to operate, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that in particular vibration-based stimuli could be useful within the frame of peripersonal and extrapersonal space perception to support 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Results show that users could rather easily judge the different cues at a reasonably high granularity, which may be sufficient to support common navigation tasks in a 3DUI.
Current research in augmented, virtual, and mixed reality (XR) reveals a lack of tool support for designing and, in particular, prototyping XR applications. While recent tools research is often motivated by studying the requirements of non-technical designers and end-user developers, the perspective of industry practitioners is less well understood. In an interview study with 17 practitioners from different industry sectors working on professional XR projects, we establish the design practices in industry, from early project stages to the final product. To better understand XR design challenges, we characterize the different methods and tools used for prototyping and describe the role and use of key prototypes in the different projects. We extract common elements of XR prototyping, elaborating on the tools and materials used for prototyping and establishing different views on the notion of fidelity. Finally, we highlight key issues for future XR tools research.
Over the last decades, different kinds of design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. In a literature review, we analyzed and compared 875 design recommendations for MR applications, elicited from 89 scientific papers and from the documentation of six industry practitioners. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors for perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.