The study of locomotion in virtual environments is a diverse and rewarding research area. Yet creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. There are often strong arguments for supporting body-centered self-motion cues, as they may improve orientation and spatial judgments and reduce motion sickness. How these cues can be introduced while the user is not physically moving around, however, is not well understood. Actuated solutions such as motion platforms are an option, but they are expensive and difficult to maintain. In this article, we instead focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper-body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion perception, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but it may induce side effects such as motion sickness and simulator sickness. To overcome the drawback of having only a few discrete depths, our system uses perspective rendering and head-tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. Participants also reported equal or better spatial perception, while cognitive load stayed the same.
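The abstract does not detail how head-tracked perspective rendering works for discrete image layers. Below is a minimal sketch of the underlying parallax computation, assuming a pinhole viewer in front of a planar front screen with parallel layers behind it; the function name and geometry simplification are our own illustration, not the paper's implementation:

```python
import numpy as np

def parallax_shift(head, layer_depth):
    """On-screen shift for an image layer that is rendered on the front
    projection plane but should appear layer_depth units behind it.

    head: (hx, hy, hz) tracked head position relative to the screen centre,
          with hz > 0 the viewing distance. Returns the (x, y) offset by
          which the layer's content must move as the head moves, so the
          layer appears world-stationary at its virtual depth.
    """
    hx, hy, hz = head
    s = layer_depth / (hz + layer_depth)  # 0 at the front plane, -> 1 far away
    return np.array([hx * s, hy * s])

# Three layers at increasing virtual depths (metres), head 0.2 m right of centre:
for d in (0.0, 0.3, 0.6):
    print(d, parallax_shift((0.2, 0.0, 0.8), d))
```

Nearer layers shift less than farther ones as the head moves, which is what produces the continuous depth impression from only three discrete image planes.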
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
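One perceptual limitation such reports commonly build on is the falloff of visual acuity with eccentricity, which foveated rendering exploits. A minimal sketch, assuming the common linear model for the minimum angle of resolution (MAR); the constants are illustrative and not taken from the report:

```python
def mar_arcmin(ecc_deg, m0=1.0, k=0.3):
    """Minimum angle of resolution (arcmin) at eccentricity ecc_deg, using
    a linear falloff model; m0 and k are illustrative constants."""
    return m0 + k * ecc_deg

def relative_shading_rate(ecc_deg):
    """Shading-rate budget relative to the fovea: under this model, sample
    density can drop roughly as 1/MAR before the loss becomes visible."""
    return mar_arcmin(0.0) / mar_arcmin(ecc_deg)

for e in (0, 5, 15, 30, 60):
    print(f"{e:>2} deg: {relative_shading_rate(e):.2f}x foveal rate")
```

Even this crude model suggests that pixels at 30 degrees eccentricity need only about a tenth of the foveal shading rate, which is the kind of saving the report's "only the necessary pixels" framing refers to.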
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system, however, can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of the cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
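The dynamic compensation mentioned in study 1 amounts to re-aiming the arm-mounted actuator so the wind keeps arriving from a fixed world-frame direction while the head turns. A minimal sketch of that correction, assuming a yaw-only 2D simplification (the function name and simplification are ours):

```python
def nozzle_yaw_deg(wind_from_deg, head_yaw_deg):
    """Yaw (degrees) for the arm-mounted nozzle in the head's reference
    frame so that wind keeps arriving from a fixed world-frame direction
    while the head rotates. Result is wrapped to [-180, 180)."""
    return (wind_from_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# Head turned 30 degrees left (-30); wind fixed at 45 degrees world yaw:
print(nozzle_yaw_deg(45.0, -30.0))  # -> 75.0 in the head frame
```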
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we will take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and Gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces is growing, driven among other things by improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely at both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to get right. Building on a general understanding of 3DUIs, we discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and we highlight guidelines for their use.
Telepresence robots allow people to participate in remote spaces, yet they can be difficult to manoeuvre with people and obstacles around. We designed a haptic-feedback system called "FeetBack", in which users place their feet when driving a telepresence robot. When the robot approaches people or obstacles, haptic proximity and collision feedback are provided on the respective sides of the feet, helping inform users about events that are hard to notice through the robot's camera views. We conducted two studies: one exploring the use of FeetBack in virtual environments, the other focused on real environments. We found that FeetBack can increase spatial presence in simple virtual environments. Users valued the feedback for adjusting their behaviour in both types of environments, though after some time it was occasionally too frequent or unneeded in certain situations. These results point to the value of foot-based haptic feedback for telepresence robot systems, while also highlighting the need to design context-sensitive haptic feedback.
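The abstract does not specify how proximity maps to feedback strength; a minimal sketch of one plausible mapping, with a linear ramp and illustrative thresholds of our own choosing:

```python
def proximity_intensity(distance_m, d_min=0.3, d_max=1.5):
    """Map obstacle distance to a vibration intensity in [0, 1]: full
    strength at or inside d_min, off beyond d_max, linear in between.
    The thresholds and the linear ramp are illustrative assumptions."""
    if distance_m >= d_max:
        return 0.0
    if distance_m <= d_min:
        return 1.0
    return (d_max - distance_m) / (d_max - d_min)

# Obstacle 0.9 m away on the robot's left: drive the left-side actuators.
print(proximity_intensity(0.9))  # -> 0.5
```

Routing the intensity to the actuators on the side of the detected obstacle is what gives the directional "respective sides of the feet" cue described above.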
Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays
(2020)
This article reports on whether the believability of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several avatar believability levels that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. In a user study, the work demonstrates a significant influence of the believability levels on presence, copresence, and realism.
When navigating larger virtual environments and computer games, natural walking is often unfeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces ("human joystick") can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot-haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating the bobbing head motions of walking), and vibrotactile cues (via vibrotactile transducers and bass shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure, however, showed no benefits and reduced usability. Together, these results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and telepresence.
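A minimal sketch of the two mechanisms the abstract combines: a "human joystick" mapping from torso lean to virtual speed, and stride-based triggering of footstep cues. All constants, names, and the dead-zone design are illustrative assumptions, not values from the study:

```python
def lean_to_speed(lean_deg, dead_zone_deg=2.0, max_lean_deg=15.0, v_max=2.0):
    """'Human joystick': map torso lean angle (forward positive) to virtual
    speed (m/s), with a small dead zone so postural sway does not move the
    viewpoint. All constants are illustrative."""
    mag = max(abs(lean_deg) - dead_zone_deg, 0.0)
    gain = min(mag / (max_lean_deg - dead_zone_deg), 1.0)
    return v_max * gain * (1.0 if lean_deg >= 0.0 else -1.0)

class FootstepCues:
    """Emit one footstep cue (sound plus a vibrotactile pulse under the
    corresponding foot) per stride of virtual travel, alternating feet."""
    def __init__(self, stride_m=0.7):
        self.stride_m, self.travelled, self.left = stride_m, 0.0, True

    def update(self, speed_mps, dt_s):
        self.travelled += abs(speed_mps) * dt_s
        while self.travelled >= self.stride_m:
            self.travelled -= self.stride_m
            print("step:", "left" if self.left else "right")  # trigger cue here
            self.left = not self.left
```

Pacing the cues by distance travelled rather than by wall-clock time keeps the substituted footsteps consistent with the displayed self-motion speed.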
Recent studies have shown that, through a careful combination of multiple sensory channels, so-called multisensory binding effects can be achieved that are beneficial for collision detection and texture recognition feedback. During the design of a new pen input device called Tactylus, specific focus was put on exploring multisensory effects of audiotactile cues to create a new but effective way to interact in virtual environments, with the aim of overcoming several of the problems observed in current devices.
In this paper, we report on four generations of display-sensor platforms for handheld augmented reality. The paper is organized as a compendium of the requirements that guided the design and construction of each generation of the handheld platforms. The first generation, reported in [17], was the result of various studies on ergonomics and human factors. Thereafter, each iteration in the design-production process was guided by experiences and evaluations that resulted in new guidelines for future versions. We describe the evolution of hardware for handheld augmented reality and the requirements and guidelines that motivated its construction.
3D User Interfaces
(2005)
Environment monitoring using multiple observation cameras is increasingly popular. Different techniques exist to visualize the incoming video streams, but only a few evaluations are available to determine the most suitable one for a given task and context. This article compares three techniques for browsing video feeds from cameras located around the user in an unstructured manner. The techniques allow mobile users to gain extra information about the surroundings, the objects, and the actors in the environment by observing a site from different perspectives. They relate local and remote cameras topologically, via a tunnel, or via a bird's-eye viewpoint. Their common goal is to enhance the viewer's spatial awareness without relying on a model or previous knowledge of the environment. We introduce several factors of spatial awareness inherent to multi-camera systems and present a comparative evaluation of the proposed techniques with respect to spatial understanding and workload.