This article investigates whether the credibility of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several avatar credibility levels that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. In a study, the work demonstrates a significant influence of the credibility levels on presence, co-presence, and realism.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The system is easily extensible, however, and can in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
Environment monitoring using multiple observation cameras is increasingly popular. Different techniques exist to visualize the incoming video streams, but only a few evaluations are available to find the one most suitable for a given task and context. This article compares three techniques for browsing video feeds from cameras that are located around the user in an unstructured manner. The techniques allow mobile users to gain extra information about the surroundings, the objects, and the actors in the environment by observing a site from different perspectives. The techniques relate local and remote cameras topologically, via a tunnel, or via a bird's-eye viewpoint. Their common goal is to enhance the spatial awareness of the viewer without relying on a model or previous knowledge of the environment. We introduce several factors of spatial awareness inherent to multi-camera systems, and present a comparative evaluation of the proposed techniques with respect to spatial understanding and workload.
In this paper, we report on four generations of display-sensor platforms for handheld augmented reality. The paper is organized as a compendium of requirements that guided the design and construction of each generation of the handheld platforms. The first generation, reported in [17], was the result of various studies on ergonomics and human factors. Thereafter, each following iteration in the design-production process was guided by experiences and evaluations that resulted in new guidelines for future versions. We describe the evolution of hardware for handheld augmented reality, along with the requirements and guidelines that motivated its construction.
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in single-user and collaborative scenarios. The effectiveness is mostly reached through the displays' inherent properties - large display real estate and high resolution - that allow for visualization of complex datasets and support group work and embodied interaction. To raise users' efficiency, however, more sophisticated user support in the form of advanced user interfaces might be needed. For that we need a profound understanding of how large, tiled displays impact users' work and behavior. We need to extract behavioral patterns for different tasks and data types. This paper reports on study results of how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results revealed a recurrent pattern showing that users prefer to process documents column-wise rather than row-wise or erratically.
Supported by their large size and high resolution, display walls are well suited to different types of collaboration. However, in order to foster rather than impede collaboration processes, interaction techniques need to be carefully designed, taking into account the possibilities and limitations of the display size and their effects on human perception and performance. In this paper we investigate the impact of visual distractors (which, for instance, might be caused by other collaborators' input) in peripheral vision on short-term memory and attention. Such distractors occur frequently when multiple users collaborate on large wall display systems and may draw attention away from the main task, thus potentially affecting performance and cognitive load. Yet the effect of these distractors is hardly understood. Gaining a better understanding may thus provide valuable input for designing more effective user interfaces. In this article, we report on two interrelated studies that investigated the effect of distractors. Depending on when the distractor is inserted in the task performance sequence, as well as the location of the distractor, user performance can be disturbed: we show that distractors may not affect short-term memory, but do have an effect on attention. We look closely into these effects, and identify future directions for designing more effective interfaces.
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based multi-layer 2.5D display, consisting of three layers of images, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but may induce side effects, e.g., motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depth layers, our system uses perspective rendering and head-tracking. A study was performed to evaluate this display with 20 participants playing custom-designed games. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported equal or better spatial perception, while cognitive load stayed the same.
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we will take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and Gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces become prevalent, driven forward by, among other factors, improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely into both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Within this course, we build on a general understanding of 3DUIs to discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and highlight guidelines for their use.