Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but it may also induce side effects such as motion sickness and simulator sickness. To overcome the drawback of multiple discrete depths, our system uses perspective rendering and head-tracking. We evaluated the display in a study with 20 participants playing custom-designed games. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. Participants also reported equal or better spatial perception, while cognitive load stayed the same.
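The abstract does not describe the rendering math; as a minimal sketch of head-tracked perspective rendering onto discrete physical layers (my own construction, with assumed coordinates: layer planes at constant z, tracked head on the positive z side), the draw position of a virtual point on a given layer is the intersection of the head-to-point sight line with that layer's plane:

```python
import numpy as np

def draw_position_on_layer(head, point, layer_z):
    """Intersect the head->point sight line with a physical layer plane.

    head, point: 3D positions in display coordinates (layer planes at
    z = layer_z, tracked head at z > 0). Returns the (x, y) at which the
    virtual point must be drawn on that layer so that, seen from the
    tracked head pose, it appears at its true 3D position.
    """
    head = np.asarray(head, dtype=float)
    point = np.asarray(point, dtype=float)
    t = (layer_z - head[2]) / (point[2] - head[2])  # ray parameter at the plane
    hit = head + t * (point - head)
    return hit[0], hit[1]

# Example: a point 0.3 m behind the front layer, viewed with the head 0.6 m
# away and 0.1 m to the right, rendered on the middle layer at z = -0.1 m.
print(draw_position_on_layer(head=(0.1, 0.0, 0.6),
                             point=(0.0, 0.0, -0.3),
                             layer_z=-0.1))  # -> (0.022..., 0.0)
```

Re-evaluating this intersection per frame from the tracked head pose is what lets content on a small number of fixed layers stay geometrically consistent as the viewer moves.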
Telepresence robots allow users to be spatially and socially present in remote environments. Yet it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we focus on how a speed control method that automatically slows the telepresence robot as it approaches obstacles affects user behavior. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study, we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from participants as they navigated through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but it should remain optional, especially in situations involving social interactions.
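The abstract does not give the control law; a minimal sketch of one plausible form (a linear speed cap with assumed slow/stop radii, not the paper's actual parameters) could look like this:

```python
def speed_cap(nearest_obstacle_m, v_max=1.2, slow_radius=1.5, stop_radius=0.3):
    """Maximum allowed forward speed (m/s) for a given obstacle distance.

    Hypothetical parameters: full speed beyond slow_radius, linear ramp
    down to zero at stop_radius. The study's actual thresholds and ramp
    shape are not specified in the abstract.
    """
    if nearest_obstacle_m >= slow_radius:
        return v_max
    if nearest_obstacle_m <= stop_radius:
        return 0.0
    frac = (nearest_obstacle_m - stop_radius) / (slow_radius - stop_radius)
    return v_max * frac

def apply_speed_control(commanded_v, nearest_obstacle_m):
    """Clamp the operator's commanded speed near obstacles."""
    return min(commanded_v, speed_cap(nearest_obstacle_m))

print(apply_speed_control(1.2, 0.9))  # -> 0.6 m/s at 0.9 m from an obstacle
```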
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional, and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
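The dynamic compensation mentioned above amounts to keeping the cue direction fixed in world space while the actuator rides on the moving head. A minimal sketch handling only yaw, with my own angle conventions (not the authors' code):

```python
def arm_target_yaw(world_wind_yaw_deg, head_yaw_deg):
    """Head-relative direction the wind actuator should point at.

    Because the arm is mounted on the HMD, holding a world-fixed wind
    direction means counter-rotating the actuator by the tracked head
    yaw. A full implementation would compensate all three rotation axes.
    """
    rel = (world_wind_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    return rel  # wrapped to (-180, 180]

# Wind from 45 deg in world space while the user looks 30 deg to the left:
print(arm_target_yaw(45.0, -30.0))  # -> 75.0 (to the user's right)
```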
Telepresence robots allow people to participate in remote spaces, yet they can be difficult to manoeuvre with people and obstacles around. We designed a haptic-feedback system called "FeetBack", in which users place their feet when driving a telepresence robot. When the robot approaches people or obstacles, haptic proximity and collision feedback are provided on the respective sides of the feet, helping inform users about events that are hard to notice through the robot's camera views. We conducted two studies: one to explore the usage of FeetBack in virtual environments, another focused on real environments. We found that FeetBack can increase spatial presence in simple virtual environments. Users valued the feedback to adjust their behaviour in both types of environments, though it was sometimes too frequent or unneeded in certain situations after a period of time. These results point to the value of foot-based haptic feedback for telepresence robot systems, while also highlighting the need to design context-sensitive haptic feedback.
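A minimal sketch of the side-mapped feedback described above (my own toy stand-in; the bearings, range, and linear intensity ramp are assumptions, not taken from the paper):

```python
def feetback_cue(bearing_deg, distance_m, collision=False, max_range_m=2.0):
    """Pick a foot side and vibration intensity for one detected obstacle.

    bearing_deg: obstacle direction relative to the robot heading
    (0 = straight ahead, positive = right).
    """
    side = "right" if bearing_deg >= 0 else "left"
    if collision:
        return side, 1.0  # full-strength collision pulse
    # Proximity: intensity grows linearly as the obstacle gets closer.
    intensity = max(0.0, 1.0 - distance_m / max_range_m)
    return side, round(intensity, 2)

print(feetback_cue(-35.0, 0.5))                 # -> ('left', 0.75)
print(feetback_cue(10.0, 0.2, collision=True))  # -> ('right', 1.0)
```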
Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays
(2020)
Using Visual and Auditory Cues to Locate Out-of-View Objects in Head-Mounted Augmented Reality
(2021)
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also reduced the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants rated the HeadJoystick as potentially more physically fatiguing over prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially for telepresence drones.
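A minimal sketch of a leaning-based velocity mapping of the kind HeadJoystick uses (deadzone plus nonlinear transfer; the constants and the power-law form here are illustrative assumptions, not the study's calibrated function):

```python
import numpy as np

def leaning_velocity(lean_offset_m, deadzone_m=0.02, gain=40.0,
                     exponent=1.5, v_max=5.0):
    """Map a seated leaning offset (m, relative to a calibrated upright
    pose) to a simulated 3D translation velocity (m/s).

    A small deadzone ignores postural sway; beyond it, speed grows
    nonlinearly with lean magnitude and is capped at v_max.
    """
    offset = np.asarray(lean_offset_m, dtype=float)
    mag = np.linalg.norm(offset)
    if mag <= deadzone_m:
        return np.zeros(3)
    speed = min(gain * (mag - deadzone_m) ** exponent, v_max)
    return (offset / mag) * speed  # velocity along the lean direction

# Leaning 5 cm forward and 1 cm up:
print(leaning_velocity((0.05, 0.0, 0.01)))
```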
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such awareness can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often linked to perception mechanisms associated with our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in applicability and feasibility due to their cost and operating effort, as well as hygienic considerations associated with, e.g., COVID-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximity, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception that supports 3DUI tasks. Current results favor the feedback combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Results show that users could rather easily judge the different cues at a reasonably high granularity. This granularity may be sufficient to support common navigation tasks in a 3DUI.
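A minimal sketch of the favored cue combination (continuous vibrotactors for proximity, a bass-shaker pulse for collision); the event fields, thresholds, and levels are my own illustrative assumptions:

```python
def foot_sole_cue(event):
    """Route one spatial event to an actuator channel under the sole.

    Follows the combination the study favored: continuous vibrotactile
    cues encode object proximity, a low-frequency bass-shaker pulse
    encodes a collision and its impact force.
    """
    if event["type"] == "collision":
        # Impact force (0..1) drives a single short low-frequency pulse.
        return {"channel": "bass_shaker",
                "amplitude": min(1.0, event["force"]),
                "duration_s": 0.15}
    # Proximity: closer objects -> stronger continuous vibration on the
    # vibrotactor nearest the object's direction.
    intensity = max(0.0, 1.0 - event["distance_m"] / 2.0)
    return {"channel": "vibrotactor_" + event["direction"],
            "amplitude": round(intensity, 2),
            "duration_s": None}  # continuous while the object is in range

print(foot_sole_cue({"type": "proximity", "distance_m": 0.6,
                     "direction": "front_left"}))  # amplitude 0.7
print(foot_sole_cue({"type": "collision", "force": 0.8}))
```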