Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field, motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments pose significant unsolved technical challenges, owing to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising directions for future research on the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
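The core idea of such perceptual rendering approaches can be illustrated with a minimal sketch: visual acuity falls off with eccentricity from the gaze point, so a renderer can spend fewer samples in the periphery. The linear acuity-falloff model and all constants below are illustrative assumptions for this sketch, not values taken from the report.

```python
# Hedged sketch of perception-driven sampling: a simple linear model of
# how the minimum angle of resolution (MAR) grows with eccentricity,
# turned into a relative per-region sampling rate. Constants are
# illustrative only.

MAR_0 = 1.0 / 48.0   # assumed foveal MAR in degrees (illustrative)
SLOPE = 0.022        # assumed MAR growth per degree of eccentricity

def relative_sampling_rate(eccentricity_deg: float) -> float:
    """Target sampling rate relative to the fovea (1.0 at the gaze point).

    Where the eye resolves less detail (larger MAR), fewer samples are
    needed, so the rate is the ratio of foveal MAR to local MAR.
    """
    mar = MAR_0 + SLOPE * max(eccentricity_deg, 0.0)
    return MAR_0 / mar

# Under this model, a region 30 degrees from the gaze point needs only
# a small fraction of the foveal sampling density:
rate_at_30_deg = relative_sampling_rate(30.0)
```

A real foveated renderer would map such a rate to hardware features (e.g., variable-rate shading tiles) rather than per-pixel sample counts, but the perceptual reasoning is the same.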
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
This article reports on whether the believability of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several believability levels for avatars that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. In a user study, the work demonstrates a significant influence of the believability levels on presence, copresence, and realism.
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we will take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and Gaming. With the advent of augmented and virtual reality in numerous application areas, the need for and interest in more effective interfaces has become prevalent, driven among others by improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely into both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can be difficult to handle and cause major issues (performance, usability, experience, motion sickness) when not designed properly. Within this course, we build on a general understanding of 3DUIs to discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and highlight guidelines for their use.
In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
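One simple way to realize such a proximity cue is to ramp actuator intensity as the hand approaches an object. The linear ramp and the 10 cm activation radius below are illustrative choices for this sketch; the paper does not specify the mapping used.

```python
def proximity_cue_intensity(distance_m: float,
                            activation_radius_m: float = 0.10) -> float:
    """Map hand-object distance to a normalized vibration intensity.

    Returns 0.0 outside the activation radius, ramping linearly to 1.0
    at contact. Both the linear ramp and the 10 cm radius are
    illustrative assumptions, not values from the study.
    """
    if distance_m >= activation_radius_m:
        return 0.0
    return 1.0 - max(distance_m, 0.0) / activation_radius_m
```

In practice such a value would drive a vibrotactile actuator's amplitude or pulse rate each frame, optionally paired with an audio cue whose volume follows the same curve.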
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in single-user and collaborative scenarios. This effectiveness is mostly due to the displays' inherent properties - large display real estate and high resolution - that allow for visualization of complex datasets and support group work and embodied interaction. To raise users' efficiency, however, more sophisticated user support in the form of advanced user interfaces might be needed. For that, we need a profound understanding of how large, tiled displays impact users' work and behavior, and we need to extract behavioral patterns for different tasks and data types. This paper reports on the results of a study of how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results revealed a recurrent pattern: users prefer to process documents column-wise rather than row-wise or erratically.
We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues such as texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels and can perceive object contour characteristics. Also demonstrated through an art application, the results show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool, and surface properties, and to support user guidance.
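A system exposing such discrete firmness levels needs a mapping from each level to an actuation setting. The linear spacing and normalized stiffness range in the sketch below are illustrative assumptions; the paper does not specify its actuation model.

```python
def firmness_to_stiffness(level: int, levels: int = 3,
                          k_min: float = 0.2, k_max: float = 1.0) -> float:
    """Map a discrete firmness level (0..levels-1) to a normalized
    actuation stiffness in [k_min, k_max].

    Linear spacing and the three-level default are illustrative choices
    for this sketch, not values taken from the study.
    """
    if not 0 <= level < levels:
        raise ValueError("level out of range")
    if levels == 1:
        return k_max
    return k_min + (k_max - k_min) * level / (levels - 1)
```

For perceptual discrimination tasks, levels are often spaced so that adjacent steps exceed the just-noticeable difference; a logarithmic spacing would be a natural alternative to the linear one shown here.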