The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. Frequently, there are strong arguments for supporting body-centered self-motion cues, as they may improve orientation and spatial judgments and reduce motion sickness. Yet, how these cues can be introduced while the user is not physically moving around is not well understood. Actuated solutions such as motion platforms can be an option, but they are expensive and difficult to maintain. In this article we instead focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper-body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.
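A leaning-based locomotion control of the kind studied here is commonly realized by mapping torso tilt beyond a small dead zone to forward travel speed. The following is a minimal sketch of that idea; the dead zone, gain, and speed clamp values are illustrative assumptions, not parameters from the reported studies:

```python
def leaning_velocity(tilt_deg, dead_zone_deg=3.0, gain=0.15, max_speed=2.0):
    """Map forward torso tilt (degrees) to virtual travel speed (m/s).

    Tilt inside the dead zone yields no motion; beyond it, speed grows
    linearly with tilt and is clamped to max_speed. All constants here
    are illustrative, not taken from the studies described above.
    """
    excess = max(0.0, tilt_deg - dead_zone_deg)
    return min(gain * excess, max_speed)
```

A dead zone of this sort keeps small postural sway from causing unintended travel, which is one reason leaning interfaces need careful tuning.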
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head-tracking. A study was performed to evaluate this display with 20 participants playing custom-designed games. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported equal or better spatial perception, while cognitive load stayed the same.
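Head-coupled perspective rendering of the kind the abstract mentions can be sketched as projecting each scene point onto a fixed layer plane along the ray from the tracked head position through the point. The coordinate frame and geometry below are illustrative assumptions; the abstract does not specify the actual implementation:

```python
def project_to_layer(point, head, layer_z):
    """Project a 3D scene point onto a display layer plane at depth layer_z,
    along the ray from the tracked head position through the point.

    Coordinates: z increases away from the viewer; head and point are
    (x, y, z) tuples. The frame is an illustrative assumption.
    """
    hx, hy, hz = head
    px, py, pz = point
    if pz == hz:
        raise ValueError("point lies in the head's depth plane; no projection")
    t = (layer_z - hz) / (pz - hz)  # parameter along the head->point ray
    return (hx + t * (px - hx), hy + t * (py - hy))
```

Re-running this projection each frame as the tracked head moves is what produces motion parallax across the discrete layers.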
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments pose significant unsolved technical challenges due to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research addressing the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to operate telepresence robots remotely, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down as it gets closer to obstacles, has on user behavior. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants as they navigated through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be offered as an option, especially in situations involving social interactions.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, though it can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behavior and are often tied to perception mechanisms in our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in applicability and feasibility due to their cost and effort to operate, as well as hygienic considerations associated with, e.g., Covid-19. Informed by results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximity, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception that supports 3DUI tasks. Current results favor the feedback combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
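A proximity cue of the kind assessed here can be sketched as a mapping from object distance to normalized vibrotactile intensity, saturating inside the peripersonal range and fading out at the edge of the extrapersonal range. The range boundaries and the linear fade are illustrative assumptions, not the study's parameters:

```python
def proximity_intensity(distance_m, peri_m=1.0, extra_m=4.0):
    """Map object distance to a normalized vibrotactile intensity in [0, 1].

    Full intensity at or inside the peripersonal range, fading linearly
    to zero at the edge of the extrapersonal range. Both range limits
    are illustrative assumptions.
    """
    if distance_m <= peri_m:
        return 1.0
    if distance_m >= extra_m:
        return 0.0
    return (extra_m - distance_m) / (extra_m - peri_m)
```

The resulting intensity would drive, e.g., a vibrotactor under the foot sole, with a separate, stronger bass-shaker pulse reserved for actual collision events.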
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we will take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces become prevalent, driven forward by improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely into both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Within this course, we build on a general understanding of 3DUIs to discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and we highlight guidelines for their use.