H-BRS Bibliography
The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. There are strong arguments for supporting body-centered self-motion cues, as they may improve orientation and spatial judgments and reduce motion sickness. However, how these cues can be introduced while the user is not moving around physically is not well understood. Actuated solutions such as motion platforms are an option, but they are expensive and difficult to maintain. In this article, we instead focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper-body leaning on perceived distance traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion perception, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps toward designing improved embodied locomotion control techniques that do not require actuated motion platforms.
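The dynamic-leaning condition hints at a leaning-based travel technique, in which torso pitch drives virtual speed. A minimal sketch of such a mapping in Python follows; the dead zone, maximum lean angle, and speed gain are illustrative assumptions, not parameters from the studies:

    # Illustrative parameters (assumptions, not values from the studies)
    DEAD_ZONE_DEG = 3.0    # small postural sway should not trigger motion
    MAX_LEAN_DEG = 25.0    # lean angle mapped to maximum speed
    MAX_SPEED = 3.0        # virtual travel speed in m/s at maximum lean

    def lean_to_speed(torso_pitch_deg: float) -> float:
        """Map forward torso inclination (degrees) to virtual forward speed.

        Angles inside the dead zone yield no motion; beyond it, speed
        scales linearly up to MAX_SPEED.
        """
        lean = max(0.0, torso_pitch_deg - DEAD_ZONE_DEG)
        effective_range = MAX_LEAN_DEG - DEAD_ZONE_DEG
        return MAX_SPEED * min(1.0, lean / effective_range)

    # Example: a 14-degree forward lean yields half the maximum speed.
    print(lean_to_speed(14.0))  # 1.5

A dead zone of this kind is a common design choice in leaning interfaces, since it keeps ordinary postural sway from being misread as an intent to travel.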
Thanks to their large size and high resolution, display walls are well suited to different types of collaboration. However, to foster rather than impede collaboration, interaction techniques need to be carefully designed, taking into account the possibilities and limitations of the display size and their effects on human perception and performance. In this paper, we investigate the impact of visual distractors in peripheral vision (caused, for instance, by other collaborators' input) on short-term memory and attention. Such distractors occur frequently when multiple users collaborate at a large wall display and may draw attention away from the main task, potentially affecting performance and cognitive load. Yet the effect of these distractors is poorly understood; a better understanding may provide valuable input for designing more effective user interfaces. We report on two interrelated studies that investigated the effect of distractors. Depending on when a distractor appears in the task sequence, as well as where it appears, user performance can be disturbed: we show that distractors may not affect short-term memory but do have an effect on attention. We look closely at these effects and identify future directions for designing more effective interfaces.
Human beings spend much time under the influence of artificial lighting. Often, it is beneficial to adapt lighting to the task as well as to the user's mental and physical constitution and well-being. This creates new requirements for lighting - human-centric lighting - and drives a need for new light control methods in interior spaces. In this paper, we present a holistic system that provides a novel approach to human-centric lighting by introducing simulation methods into interactive light control, adapting the lighting to the user's needs. We describe a simulation and evaluation platform that uses interactive stochastic spectral rendering methods to simulate light sources, allowing for their interactive adjustment and adaptation.
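As a rough illustration of the stochastic spectral idea, the sketch below estimates a light source's luminance-weighted output by Monte Carlo sampling over wavelengths. The Gaussian spectral power distribution and sensitivity curve are stand-ins for illustration only, not data or methods from the presented system:

    import math, random

    def spd(wavelength_nm: float) -> float:
        """Stand-in spectral power distribution: a warm-white peak at 600 nm."""
        return math.exp(-((wavelength_nm - 600.0) / 80.0) ** 2)

    def luminous_efficiency(wavelength_nm: float) -> float:
        """Rough stand-in for the photopic sensitivity curve, peaking at 555 nm."""
        return math.exp(-((wavelength_nm - 555.0) / 45.0) ** 2)

    def estimate_luminous_output(samples: int = 10_000) -> float:
        """Monte Carlo estimate of the integral of spd * efficiency over 380-780 nm."""
        lo, hi = 380.0, 780.0
        total = 0.0
        for _ in range(samples):
            wl = random.uniform(lo, hi)          # stochastic wavelength sample
            total += spd(wl) * luminous_efficiency(wl)
        return (hi - lo) * total / samples       # scale by integration domain

    print(estimate_luminous_output())

Sampling wavelengths stochastically rather than on a fixed grid is what makes such spectral estimates cheap enough to recompute interactively as the user adjusts a light source.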
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper, we propose and evaluate a custom-built, projection-based multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but it may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. Participants also reported equal or better spatial perception, while cognitive load remained the same.
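Head-tracked perspective rendering of this kind is typically realized with an off-axis projection whose frustum follows the viewer relative to a fixed screen plane. A minimal sketch, assuming a display layer centered at the origin and metric head coordinates; the function and parameter names are our own, not from the paper:

    def off_axis_frustum(head, screen_w, screen_h, near):
        """Compute off-axis frustum bounds for a head-tracked, fixed screen layer.

        Assumes the layer is centered at the origin in the x/y plane and the
        tracked head position `head` = (x, y, z) is in the same metric
        coordinates, with z > 0 in front of the screen.
        """
        ex, ey, ez = head
        scale = near / ez                     # project screen edges onto the near plane
        left   = (-screen_w / 2 - ex) * scale
        right  = ( screen_w / 2 - ex) * scale
        bottom = (-screen_h / 2 - ey) * scale
        top    = ( screen_h / 2 - ey) * scale
        return left, right, bottom, top

    # Example: viewer 0.6 m in front of a 1.2 m x 0.9 m layer, offset to the right.
    print(off_axis_frustum((0.2, 0.0, 0.6), 1.2, 0.9, 0.1))

Recomputing such a frustum per layer and per frame is what lets a small number of discrete image planes still produce consistent motion parallax for a tracked viewer.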
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in both single-user and collaborative scenarios. This effectiveness stems largely from the displays' inherent properties - large display real estate and high resolution - which allow complex datasets to be visualized and support group work and embodied interaction. To raise users' efficiency, however, more sophisticated user support in the form of advanced user interfaces might be needed. This requires a profound understanding of how large, tiled displays affect users' work and behavior, and of the behavioral patterns that emerge for different tasks and data types. This paper reports on a study of how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results reveal a recurrent pattern: users prefer to process documents column-wise rather than row-wise or erratically.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, and larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
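A prominent way to exploit these perceptual limitations is foveated rendering, where shading effort decreases with angular distance from the gaze point. A minimal sketch of selecting a shading rate per screen region; the eccentricity thresholds and pixels-per-degree value are illustrative assumptions, not values from the report:

    import math

    def shading_rate(pixel, gaze, ppd, full_deg=5.0, mid_deg=15.0):
        """Pick a coarse shading rate from angular distance to the gaze point.

        `pixel` and `gaze` are screen positions in pixels; `ppd` is the
        display's pixels-per-degree. Thresholds are illustrative only.
        """
        dist_px = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
        eccentricity_deg = dist_px / ppd
        if eccentricity_deg <= full_deg:
            return 1      # full rate: shade every pixel in the fovea
        if eccentricity_deg <= mid_deg:
            return 2      # half rate in the near periphery (one shade per 2x2 block)
        return 4          # quarter rate in the far periphery (one shade per 4x4 block)

    print(shading_rate((1900, 1000), (960, 540), ppd=40.0))  # far periphery -> 4

Because peripheral acuity falls off steeply, coarsening shading this way can save a large fraction of the per-pixel work while remaining hard to notice for the viewer.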
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues - in the form of wind, warmth, moving and single-point touch events, and water spray - to parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of the cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
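The head-rotation compensation described in study 1 amounts to re-expressing a world-fixed wind direction in head coordinates. A minimal sketch for the yaw axis only; the function name and degree convention are our assumptions, not the paper's implementation:

    def actuator_angle(wind_world_deg: float, head_yaw_deg: float) -> float:
        """Angle at which a head-mounted wind nozzle must blow so the wind
        keeps its world-fixed direction while the head rotates.

        Both angles are yaw in degrees; the result is wrapped to [0, 360).
        """
        return (wind_world_deg - head_yaw_deg) % 360.0

    # Example: wind from 0 degrees (world frame) while the user looks 90
    # degrees to the right must be emitted at 270 degrees in head coordinates.
    print(actuator_angle(0.0, 90.0))  # 270.0

Updating this angle every frame is what makes the wind feel anchored to the virtual world rather than to the user's head.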
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and Gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces grows, driven by, among other factors, improved technologies, increasing application complexity, and user experience requirements. We highlight key issues in the design of diverse 3DUIs by looking closely at both simple and advanced 3D selection/manipulation and spatial navigation interface design. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Building on a general understanding of 3DUIs, we discuss typical pitfalls by examining theoretical and practical aspects of selection, manipulation, and navigation, and highlight guidelines for their use.
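As a small taste of the selection topic, ray-casting - picking the first object intersected by a pointing ray - is one of the canonical 3D selection techniques covered in such courses. A minimal sketch using sphere proxies; the scene contents and names are illustrative:

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the distance to the nearest ray/sphere intersection, or None.

        Assumes `direction` is a normalized 3D vector.
        """
        ox, oy, oz = origin; dx, dy, dz = direction; cx, cy, cz = center
        lx, ly, lz = cx - ox, cy - oy, cz - oz          # vector origin -> center
        t_closest = lx * dx + ly * dy + lz * dz         # projection onto the ray
        d2 = lx * lx + ly * ly + lz * lz - t_closest ** 2
        if t_closest < 0 or d2 > radius ** 2:
            return None
        return t_closest - math.sqrt(radius ** 2 - d2)  # first entry point

    def pick(origin, direction, objects):
        """Select the closest object hit by the pointing ray."""
        hits = [(t, name) for name, (c, r) in objects.items()
                if (t := ray_sphere_hit(origin, direction, c, r)) is not None]
        return min(hits)[1] if hits else None

    # Illustrative scene: two spheres along the ray; the nearer one wins.
    scene = {"lamp": ((0, 0, 5), 1.0), "chair": ((0, 0, 9), 1.0)}
    print(pick((0, 0, 0), (0, 0, 1), scene))  # "lamp"

Even this simple technique illustrates a typical 3DUI pitfall: with distant or small targets, tiny hand rotations sweep the ray across large distances, which is one reason selection design receives so much attention.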