Prof. Dr. André Hinkenjann
H-BRS Bibliography
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
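The abstract mentions a linkless octree with constant-time lookup for caching diffuse illumination at path hitpoints. As a rough illustration of that idea (not the paper's implementation), the sketch below keys octree cells by Morton codes in a hash map, so storing and looking up a running radiance average takes constant time on average; the class name, quantization scheme, and fixed octree level are all assumptions.

```python
def morton_key(ix, iy, iz, level):
    """Interleave the bits of quantized cell coordinates into one Morton code."""
    key = 0
    for b in range(level):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return (level << 60) | key  # tag with the level so levels never collide


class IlluminationCache:
    """Hash-map-backed ('linkless') octree cache for diffuse radiance."""

    def __init__(self, level=8, extent=1.0):
        self.level = level
        self.res = 1 << level   # voxels per axis at the chosen level
        self.extent = extent    # scene bounding-cube edge length
        self.cells = {}         # Morton key -> (sample count, mean RGB)

    def _key(self, pos):
        q = [min(self.res - 1, max(0, int(c / self.extent * self.res)))
             for c in pos]
        return morton_key(q[0], q[1], q[2], self.level)

    def store(self, pos, radiance):
        # Incrementally average the diffuse radiance samples in this cell.
        key = self._key(pos)
        n, mean = self.cells.get(key, (0, (0.0, 0.0, 0.0)))
        n += 1
        mean = tuple(m + (r - m) / n for m, r in zip(mean, radiance))
        self.cells[key] = (n, mean)

    def lookup(self, pos):
        entry = self.cells.get(self._key(pos))
        return entry[1] if entry else None
```

A real system would cache at multiple octree levels and fall back to coarser cells on a miss; this sketch keeps a single level for brevity.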
The latest trend in inverse rendering for reconstruction is to use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation; accomplishing both simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real time with plenty of room for future optimizations and extensions.
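As background for the "queried millions of times per frame" remark: rendering a radiance field boils down to a volume-rendering quadrature along each camera ray, with one field query per sample. A minimal sketch of that front-to-back compositing, with a plain Python callable standing in for the MLP (the function name and sampling scheme are illustrative):

```python
import math

def render_ray(field, t_near, t_far, n_samples):
    """field(t) -> (sigma, color); returns the accumulated scalar ray color."""
    dt = (t_far - t_near) / n_samples
    transmittance = 1.0   # fraction of light surviving to the camera so far
    color = 0.0
    for i in range(n_samples):
        t = t_near + (i + 0.5) * dt
        sigma, c = field(t)                   # one network query per sample
        alpha = 1.0 - math.exp(-sigma * dt)   # opacity of this ray segment
        color += transmittance * alpha * c    # front-to-back compositing
        transmittance *= 1.0 - alpha
    return color
```

With, say, 64 samples per ray and one ray per pixel, a single 1080p frame already needs over 130 million field queries, which is why MLP evaluation cost dominates rendering time.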
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While desktop systems can support the designer, limited display sizes and a potentially unnatural interaction with digital designs make it challenging for a non-domain expert to get the right impression of a surface's appearance. At the same time, large-format editing of structure and gloss is becoming increasingly important, as advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately convey gloss, especially to non-domain experts: the complex interplay of light sources and camera position must be controlled via software widgets, so only small parts of the data set can be properly inspected at a time, and real-world lighting is not taken into account. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects such as real-world reflections and live image-based lighting, and for interacting with the design using virtual light sources through natural interaction techniques, enabling a more accurate inspection even for non-domain experts.
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based, multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported better or equal spatial perception, while cognitive load stayed the same.
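A minimal sketch of the perspective correction that head tracking enables on such a display: each 3D scene point is projected through the tracked eye position onto the fixed plane of an image layer, so the rendered content shifts with head motion and the discrete layers fuse into a coherent scene. The geometry below is illustrative, not the paper's implementation:

```python
def project_to_layer(eye, point, layer_z):
    """Intersect the eye->point ray with the layer plane z = layer_z.

    eye, point: (x, y, z) in a shared display coordinate system.
    Returns the (x, y) position at which the point must be drawn on the layer.
    """
    ex, ey, ez = eye
    px, py, pz = point
    if pz == ez:
        raise ValueError("point lies in the eye plane; no intersection")
    s = (layer_z - ez) / (pz - ez)   # ray parameter at the layer plane
    return (ex + s * (px - ex), ey + s * (py - ey))
```

Because the drawn position depends on the eye coordinates, the three layers exhibit correct relative parallax as the tracked head moves, which is what lets a few discrete depths approximate continuous depth.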
This paper presents groupware to study group behavior while conducting a creative task on large, high-resolution displays, along with the results of a between-subjects study. In the study, 12 groups of two participants each prototyped a 2D level on a 7 m x 2.5 m large, high-resolution display, using tablet PCs for interaction. Six groups worked in a condition where group members had equal roles and interaction possibilities; another six groups worked in a condition where group members had different roles: level designer and 2D artist. The results revealed that in the different-roles condition, the participants worked significantly more tightly and created more assets, although we also detected some shortcomings of that configuration. We discuss the gained insights regarding system configuration, groupware interfaces, and group behavior.
Foreword to the Special Section on the Symposium on Virtual and Augmented Reality 2019 (SVR 2019)
(2020)
In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues that inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, known as foveated rendering. As more and more computationally involved shading is requested and display resolutions increase, maintaining low latencies is challenging when rendering in a virtual reality context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, besides the reduction of the visual acuity, the eye is an optical system, filtering radiance through lenses. The lenses create depth-of-field (DoF) effects when accommodated to objects at varying distances. The central idea of this article is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system, tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study, showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while shaded samples are reduced by more than 69%.
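Two quantities drive a system like the one described above: a shading-rate falloff with retinal eccentricity (foveation) and the circle of confusion of a thin lens accommodated to the gaze depth (gaze-contingent DoF). The sketch below uses a common linear acuity model and the thin-lens formula; all constants and function names are assumptions, not values from the article:

```python
def shading_rate(eccentricity_deg, e0=2.3, m=0.3):
    """Relative sample density: 1.0 in the fovea, decaying with eccentricity.

    Uses the common linear minimum-angle-of-resolution model,
    rate ~ e0 / (e0 + m * eccentricity); e0 and m are assumed constants.
    """
    return e0 / (e0 + m * eccentricity_deg)

def circle_of_confusion(depth, focus_depth, aperture, focal_len):
    """Thin-lens circle-of-confusion diameter for a point at `depth`
    when the lens is accommodated to `focus_depth` (meters throughout)."""
    return abs(aperture * focal_len * (depth - focus_depth)
               / (depth * (focus_depth - focal_len)))
```

The connection exploited by the article is that wherever the circle of confusion is large, the image is blurred anyway, so a low shading rate (and its artifacts) can be concealed by the DoF filter.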
Large, high-resolution displays are highly suitable for creating digital environments for co-located collaborative task solving. Yet placing multiple users in a shared environment may increase the risk of interferences, causing mental discomfort and decreasing team efficiency. Coordination strategies and techniques have been introduced to mitigate such interferences. However, in mixed-focus collaboration scenarios users repeatedly switch between loose and tight collaboration, so different coordination techniques may be required depending on the current collaboration state of team members. For that, systems have to recognize collaboration states, as well as transitions between them, to adjust the coordination strategy appropriately. Previous studies on group behavior during collaboration in front of large displays investigated only collaborative coupling states, not the transitions between them. To address this gap, we conducted a study in which 12 participant dyads solved two tasks in two different conditions (focus and overview) in front of a tiled display. We examined group dynamics and categorized transitions by means of changes in proximity, verbal communication, visual attention, visual interface, and gestures. The findings can be valuable for user interface design and the development of group behavior models.
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in single-user and collaborative scenarios. This effectiveness stems mostly from the displays' inherent properties, large display real estate and high resolution, which allow for visualization of complex datasets and support group work and embodied interaction. To raise users' efficiency further, however, more sophisticated support in the form of advanced user interfaces might be needed. For that, we need a profound understanding of how large, tiled displays impact users' work and behavior, and we need to extract behavioral patterns for different tasks and data types. This paper reports on the results of a study on how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results revealed a recurrent pattern: users prefer to process documents column-wise rather than row-wise or erratically.
This article reports on whether the believability of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several believability levels for avatars that could hypothetically influence virtual exposure therapy for agoraphobia were developed, together with a potential exposure scenario. Within a user study, the work demonstrates a significant influence of the believability levels on presence, copresence, and realism.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc. especially in real‐time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi‐view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering a user full visual experience.