Driven in particular by today’s game console technology, the number of 3D interaction techniques that integrate multiple modalities is steadily increasing. However, many developers do not fully explore and deploy the sensorimotor possibilities of the human body, partly because of methodological and knowledge limitations. In this paper, we propose a design approach for 3D interaction techniques that considers the full potential of the human body. We show how “human potential” can be analyzed and how such analysis can be instrumental in designing new or alternative multi-sensory and potentially full-body interfaces.
We report on two experiments that deploy low-frequency audio and strong vibrations to induce haptic-like sensations throughout the human body. Vibration is quite frequently deployed in immersive systems, for example to provide collision feedback, but its actual effects are not well understood [Kruijff & Pander 2005; Kruijff et al. 2015]. The starting point of our experiments was a study by Rasmussen [Rasmussen 1982], which found that different vibration frequencies are experienced differently throughout the body. We show how vibrations affect sensations throughout the body and may provide directional cues to certain body parts, while also illustrating the difficulties involved.
In this paper, we present an Augmented Reality (AR) system for aiding field workers of utility companies in outdoor tasks such as maintenance, planning or surveying of underground infrastructure. Our work addresses these tasks using spatial interaction and visualization techniques for mobile AR applications, as well as a new mobile device design. We also present results from evaluations of the prototype application for underground infrastructure spanning various user groups. Our application has been driven by feedback from industrial collaborators in the utility sector, and includes a translation tool for automatically importing data from utility company databases of underground assets.
Environment monitoring using multiple observation cameras is increasingly popular. Different techniques exist to visualize the incoming video streams, but only a few evaluations are available to determine the most suitable one for a given task and context. This article compares three techniques for browsing video feeds from cameras that are located around the user in an unstructured manner. The techniques allow mobile users to gain extra information about the surroundings, the objects and the actors in the environment by observing a site from different perspectives. The techniques relate local and remote cameras topologically, via a tunnel, or via a bird's-eye viewpoint. Their common goal is to enhance the spatial awareness of the viewer without relying on a model or previous knowledge of the environment. We introduce several factors of spatial awareness inherent to multi-camera systems, and present a comparative evaluation of the proposed techniques with respect to spatial understanding and workload.
In this paper, we report on four generations of display-sensor platforms for handheld augmented reality. The paper is organized as a compendium of requirements that guided the design and construction of each generation of the handheld platforms. The first generation, reported in [17], was the result of various studies on ergonomics and human factors. Thereafter, each following iteration in the design-production process was guided by experiences and evaluations that resulted in new guidelines for future versions. We describe the evolution of hardware for handheld augmented reality, and the requirements and guidelines that motivated its construction.
This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.
Supported by their large size and high resolution, display walls are well suited to different types of collaboration. However, in order to foster rather than impede collaboration processes, interaction techniques need to be carefully designed, taking into account the possibilities and limitations of the display size and their effects on human perception and performance. In this paper we investigate the impact of visual distractors (which, for instance, might be caused by other collaborators' input) in peripheral vision on short-term memory and attention. Such distractors occur frequently when multiple users collaborate on large wall display systems and may draw attention away from the main task, thus potentially affecting performance and cognitive load. Yet, the effect of these distractors is hardly understood; a better understanding may provide valuable input for designing more effective user interfaces. In this article, we report on two interrelated studies that investigated the effect of distractors. Depending on when the distractor is inserted in the task performance sequence, as well as the location of the distractor, user performance can be disturbed: we show that distractors may not affect short-term memory, but do have an effect on attention. We look closely into these effects and identify future directions for designing more effective interfaces.
Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames, in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly by adding more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.
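To make the sampling strategy concrete, the sketch below illustrates, as a hypothetical simplification rather than the paper's implementation, how a per-pixel resampling priority could combine gaze eccentricity from eye tracking with a reprojection-error estimate derived from the G-Buffer; the function name, Gaussian falloff and weights are assumptions for illustration only.

```python
import numpy as np

def sample_priority(pixel_xy, gaze_xy, reprojection_error,
                    fovea_radius=0.1, error_weight=2.0):
    # Eccentricity: normalized screen-space distance between this pixel and the gaze point.
    eccentricity = np.linalg.norm(np.asarray(pixel_xy) - np.asarray(gaze_xy))
    # Acuity-inspired falloff: full priority at the fovea, decaying with eccentricity.
    foveation = np.exp(-(eccentricity / fovea_radius) ** 2)
    # Pixels whose reprojected color is likely wrong also need fresh ray samples.
    priority = foveation + error_weight * reprojection_error
    return float(np.clip(priority, 0.0, 1.0))
```

In such a scheme, pixels with priority near 1 would be traced anew each frame, while low-priority pixels reuse reprojected samples from the previous frame.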
In this article, we report on a user study investigating the effects of multisensory cues on triggering emotional responses in immersive games. Isolating the effect of a specific sensory cue on the emotional state is a difficult feat. The experiment presented here is the first of a series that aims at producing usable guidelines that can be applied to reproducing similar emotional responses, as well as methods to measure their effects. As such, we are interested in methodologies both to design effective stimuli and to assess their quality and effect. We start by identifying the main challenges and the methodology we followed. Thereafter, we closely analyze the study results to address some of the challenges, and identify the potential for improving the induced stimuli (cause) and effect, as well as the analytical methods used to pinpoint the extent of the effect.
This paper introduces a novel zooming interface deploying a pico projector that, instead of a second visual display, leverages audioscapes for contextual information. The technique enhances current flashlight metaphor approaches, supporting flexible usage within the domain of spatial augmented reality to focus on object- or environment-related details. In a user study we quantified the projection limitations related to the depiction of details through the pico projector and validated the interaction approach. The quantified results correlate pixel density, detail and proximity, which can greatly aid the design of more effective, legible zooming interfaces for pico projectors; the study can also serve as an example testbed for assessing aberrations with other projectors. Furthermore, users rated the audioscape-based zooming technique favorably, showing the validity of the approach. The studies form the foundation for extending our work by detailing the audio-visual approach and looking more closely into the role of real-world features in interpreting projected content.
In this paper, we report on novel zooming interface methods that deploy a small handheld projector. Using mobile projections to visualize object- or environment-related information on real objects introduces new aspects for zooming interfaces. We investigate different approaches that focus on maintaining a level of context while exploring detail in the information. In doing so, we propose methods that provide alternative contextual cues within a single projector, and exploit the potential of zoom lenses to support a multi-level zooming approach. Furthermore, we look into the correlation between pixel density, distance to target and projection size. Alongside these techniques, we report on multiple user studies in which we quantified the projection limitations and validated various interactive visualization approaches. Thereby, we focused on solving issues related to pixel density, brightness and contrast that affect the design of more effective, legible zooming interfaces for handheld projectors.
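The correlation between pixel density, distance and projection size follows directly from the projector's throw geometry. The sketch below (with an assumed throw ratio and resolution, not the specific devices used in the studies) estimates the on-surface pixel density for a given projection distance, which is what ultimately bounds the legible level of detail.

```python
def projected_pixel_density(distance_m, throw_ratio=1.4, horizontal_pixels=854):
    # Throw ratio = distance / image width, so the image grows linearly with distance.
    image_width_m = distance_m / throw_ratio
    # Density in pixels per centimetre on the projection surface.
    pixels_per_cm = horizontal_pixels / (image_width_m * 100.0)
    return image_width_m, pixels_per_cm

# Example with the assumed values: at 0.5 m the image is ~0.36 m wide,
# giving roughly 24 px/cm; doubling the distance halves the density
# and thus the renderable detail.
```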
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in single-user and collaborative scenarios. This effectiveness is mostly achieved through the displays' inherent properties - large display real estate and high resolution - which allow for the visualization of complex datasets and support group work and embodied interaction. To raise users' efficiency, however, more sophisticated user support in the form of advanced user interfaces might be needed. For that, we need a profound understanding of how large, tiled displays impact users' work and behavior, and we need to extract behavioral patterns for different tasks and data types. This paper reports on the results of a study of how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results reveal a recurrent pattern showing that users prefer to process documents column-wise rather than row-wise or erratically.
Enhancing touch screen interfaces through non-visual cues has been shown to improve performance. In this paper we report on a novel system that explores the use of a force-sensitive, motion-platform-enhanced tablet interface to improve multi-modal interaction based on visuo-haptic instead of tactile feedback. Extending mobile touch screens with force-sensitive haptic feedback has the potential to enhance performance when interacting with GUIs and to improve the perception and understanding of relations. A user study was performed to determine the perceived recognition of different 3D shapes and the perception of different heights. Furthermore, two application scenarios are proposed to explore our visuo-haptic system. The studies show a positive stance towards the feedback, as well as limitations related to the perception of the feedback.
In this article, we report on whether the believability of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, we develop several believability levels for avatars that could hypothetically influence virtual exposure therapy for agoraphobia, as well as a potential exposure scenario. Within a study, the work demonstrates a significant influence of the believability levels on presence, co-presence and realism.
In this article, we report on challenges and potential methodologies to support the design and validation of multisensory techniques. Such techniques can be used for enhancing engagement in immersive systems. Yet, designing effective techniques requires careful analysis of the effect of different cues on user engagement. The level of engagement spans the general level of presence in an environment, as well as the specific emotional response to a set trigger. However, measuring and analyzing the actual effect of cues is hard, as it spans numerous interconnected issues. In this article, we identify the different challenges and potential validation methodologies that affect the analysis of multisensory cues on user engagement. In doing so, we provide an overview of issues and potential validation directions as an entry point for further research. The various challenges are supported by lessons learned from a pilot study, which focused on reflecting on the initial validation methodology by analyzing the effect of different stimuli on user engagement.
Recent studies have shown that through a careful combination of multiple sensory channels, so-called multisensory binding effects can be achieved that are beneficial for collision detection and texture recognition feedback. During the design of a new pen-input device called Tactylus, specific focus was placed on exploring multisensory effects of audiotactile cues to create a new and effective way to interact in virtual environments, with the aim of overcoming several of the problems observed in current devices.
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
A wide field of view augmented reality display is a special type of head-worn device that enables users to view augmentations in the peripheral visual field. However, the actual effects of a wide field of view display on the perception of augmentations have not been widely studied. To improve our understanding of this type of display when conducting divided-attention search tasks, we conducted an in-depth experiment testing two view management methods, in-view and in-situ labelling. With in-view labelling, search target annotations appear on the display border with a corresponding leader line, whereas in-situ annotations appear without a leader line, as if affixed to the referenced objects in the environment. Results show that target discovery rates consistently drop with in-view labelling and increase with in-situ labelling as the display angle approaches 100 degrees of field of view. Past this point, the performances of the two view management methods begin to converge, suggesting equivalent discovery rates at approximately 130 degrees of field of view. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision, and that field of view has little impact on response time and mental workload.
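The distinction between the two view management methods can be summarized in a small decision sketch (a simplified illustration with assumed names, not the system used in the experiment): targets inside the display's field of view receive in-situ labels at the object, while off-display targets fall back to in-view labels clamped to the border with a leader line.

```python
def choose_label_mode(target_azimuth_deg, display_fov_deg):
    # Half the horizontal field of view on either side of the view center.
    half_fov = display_fov_deg / 2.0
    if abs(target_azimuth_deg) <= half_fov:
        return "in-situ"   # annotation affixed to the referenced object
    return "in-view"       # annotation on the display border, with a leader line
```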
Within this article, we offer a new design perspective for the analysis, creation, and validation of 3D user interfaces, using assistive technology as a source of inspiration. While 3DUI design has matured over the last decade, many open issues remain to be solved. An assistive technology design perspective can aid: it can offer a stringent test environment to uncover issues and provide a different view on design by looking at human potential. Subsequently, we look at major fields of interest, identifying pitfalls in 3DUI design and showing how assistive technology can be used to overcome them, and outline particular fields of research, or research directions, that deserve further attention.
In this paper, we explore techniques that aim to improve site understanding for outdoor Augmented Reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and the variable perspective view. We describe in detail the conceptual, visualization and interaction aspects of these techniques and their evaluation through a comparative user study. The results we have obtained strengthen the validity of our approach and the applicability of our methods to a large range of application domains.
This paper focuses on the design of devices for handheld spatial interaction. In particular, it addresses the requirements and construction of a new platform for interactive AR, described from an ergonomics stance, prioritizing human factors of spatial interaction. The result is a multi-configurable platform for spatial interaction, evaluated in two AR application scenarios. The user tests validate the design with regards to grip, weight balance and control allocation, and provide new insights on the human factors involved in handheld spatial interaction.
Human beings spend much time under the influence of artificial lighting. Often, it is beneficial to adapt lighting to the task, as well as to the user's mental and physical constitution and well-being. This leads to new requirements for lighting - human-centric lighting - and drives the need for new light control methods in interior spaces. In this paper we present a holistic system that provides a novel approach to human-centric lighting by introducing simulation methods into interactive light control, adapting the lighting based on the user's needs. We describe a simulation and evaluation platform that uses interactive stochastic spectral rendering methods to simulate light sources, allowing for their interactive adjustment and adaptation.
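A basic building block of such spectral light simulation is turning a simulated spectral power distribution into a perceived color. As a minimal sketch (assuming equally sampled arrays for the SPD and the CIE 1931 color-matching functions are already available; this is not the system's actual code), the tristimulus values can be obtained by numerical integration:

```python
import numpy as np

def spd_to_xyz(wavelengths_nm, spd, xbar, ybar, zbar):
    # Integrate the spectral power distribution against the CIE 1931
    # color-matching functions (trapezoidal rule over the sampled wavelengths).
    X = np.trapz(spd * xbar, wavelengths_nm)
    Y = np.trapz(spd * ybar, wavelengths_nm)
    Z = np.trapz(spd * zbar, wavelengths_nm)
    # Normalization and conversion to display RGB are application-specific.
    return X, Y, Z
```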
In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs.
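For illustration, a two-degree-of-freedom rate-control mapping of the kind compared in the study could look like the sketch below (the parameter values and deadzone are assumptions, not the study's settings); note that the study's own recommendation is to avoid velocity-controlled rotation when wearing an HMD.

```python
def leaning_to_velocity(lean_deg, swivel_deg, deadzone_deg=2.0,
                        max_input_deg=15.0, max_speed_m_s=3.0, max_turn_deg_s=45.0):
    # Rate control: deflection beyond a small deadzone maps linearly to velocity.
    def rate(deflection_deg, max_out):
        if abs(deflection_deg) < deadzone_deg:
            return 0.0
        scale = min((abs(deflection_deg) - deadzone_deg) /
                    (max_input_deg - deadzone_deg), 1.0)
        return scale * max_out * (1.0 if deflection_deg > 0 else -1.0)

    forward_m_s = rate(lean_deg, max_speed_m_s)     # forward/backward from leaning
    turn_deg_s = rate(swivel_deg, max_turn_deg_s)   # rotation from seat swivel
    return forward_m_s, turn_deg_s
```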