H-BRS Bibliography
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (21)
Document Type
- Conference Object (13)
- Article (5)
- Book (monograph, edited volume) (2)
- Conference Proceedings (1)
Year of publication
- 2017 (21)
Keywords
- Ray Tracing (2)
- Virtual Reality (2)
- foveated rendering (2)
- 3D user interfaces (1)
- Active locomotion (1)
- Alkane (1)
- Challenges (1)
- Computer Graphics (1)
- Containerization (1)
- Distributed rendering (1)
- Docker (1)
- Einführung (1)
- Elektronik (1)
- Eye Tracking (1)
- Focus plus context (1)
- Games (1)
- Gaze Behavior (1)
- Head-mounted Display (1)
- Immersion (1)
- Intelligent virtual agents (1)
- Inventory (1)
- MP2.5 (1)
- Methodologies (1)
- Multisensory cues (1)
- Multiuser (1)
- Presence (1)
- Rendering (1)
- Serious Games (1)
- Social Virtual Reality (1)
- Synthetic perception (1)
- Tiled-display walls (1)
- Touchscreen interaction (1)
- Usability (1)
- User engagement (1)
- VR system design (1)
- Virtual Environments (1)
- Virtual attention (1)
- Virtual reality (1)
- data analysis (1)
- eye movement (1)
- eye tracking (1)
- eye-tracking (1)
- fuel (1)
- gaze (1)
- haptics (1)
- hydrocarbon (1)
- lipid (1)
- motion cueing (1)
- natural user interface (1)
- octane (1)
- perceived quality (1)
- projection based systems (1)
- quantum mechanics (1)
- region of interest (1)
- spatial augmented reality (1)
- virtual locomotion (1)
- virtual reality (1)
- visuohaptic feedback (1)
- zooming interfaces (1)
This article examines whether the believability of avatars is a viable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several avatar believability levels that could hypothetically influence virtual exposure therapy for agoraphobia are developed, together with a potential exposure scenario. In a user study, the work demonstrates a significant influence of the believability levels on presence, copresence, and realism.
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze, exploiting the limitations of the human visual system to increase rendering performance. Foveated rendering has particularly great potential when requirements such as low-latency rendering must be fulfilled to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and which also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks combined various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing on the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the quality ratings users gave for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy, and quality ratings.
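The core measurement in such an analysis — how far a recorded gaze direction deviates from the direction to the fixation target, in degrees of visual angle — can be sketched as follows. This is an illustrative sketch, not the study's actual pipeline; the function names and the averaging scheme are assumptions.

```python
import math

def angular_error(gaze_dir, target_dir):
    """Angle in degrees between a gaze direction vector and the
    direction vector from the eye to the fixation target."""
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(t * t for t in target_dir)))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def fixation_accuracy(samples):
    """Mean angular error over (gaze, target) direction pairs
    recorded during one fixation task."""
    errors = [angular_error(g, t) for g, t in samples]
    return sum(errors) / len(errors)
```

Binning these per-sample errors by the target's eccentricity (its angular distance from the view center) would then yield the eccentricity-dependent view of accuracy mentioned above.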
In order to achieve the highest possible performance, the ray traversal and intersection routines at the core of every high-performance ray tracer are usually hand-coded, heavily optimized, and implemented separately for each hardware platform—even though they share most of their algorithmic core. The results are implementations that heavily mix algorithmic aspects with hardware and implementation details, making the code non-portable and difficult to change and maintain.
In this paper, we present a new approach that offers the ability to define, in a functional language, a set of conceptual, high-level language abstractions that are optimized away by a special compiler in order to maximize performance. Using this abstraction mechanism, we separate a generic ray traversal and intersection algorithm from its low-level aspects that are specific to the target hardware. We demonstrate that our code is not only significantly more flexible, simpler to write, and more concise, but also that the compiled results perform as well as state-of-the-art implementations on any of the tested CPU and GPU platforms.
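The separation the abstract describes — a generic traversal algorithm parameterized over platform-specific primitives — can be illustrated with a small sketch. This is written in Python rather than the authors' functional language, and all names and the toy 1-D instantiation are illustrative, not the paper's actual interface.

```python
def traverse(root, ray, intersect_box, intersect_prims):
    """Generic closest-hit traversal: the algorithm is fixed, while the
    bounding-volume test and the primitive test are supplied as
    hardware- or representation-specific functions."""
    best = None  # closest hit so far: (distance, primitive)
    stack = [root]
    while stack:
        node = stack.pop()
        if not intersect_box(node["bounds"], ray, best):
            continue
        if "prims" in node:  # leaf node
            hit = intersect_prims(node["prims"], ray, best)
            if hit is not None and (best is None or hit[0] < best[0]):
                best = hit
        else:
            stack.extend(node["children"])
    return best

# Toy 1-D instantiation: "rays" are query points, bounds are intervals,
# primitives are numbers; a hit is the closest primitive to the query.
def box_test(bounds, x, best):
    lo, hi = bounds
    return lo <= x <= hi

def prim_test(prims, x, best):
    hits = [(abs(p - x), p) for p in prims]
    return min(hits) if hits else None

tree = {"bounds": (0, 10), "children": [
    {"bounds": (0, 5), "prims": [1, 4]},
    {"bounds": (5, 10), "prims": [7]},
]}
```

Swapping in SIMD box tests or triangle intersectors changes only the two supplied functions, never the traversal loop — which is, in spirit, the separation the paper achieves at compile time without the runtime cost of such indirection.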
Simulating eye movements for virtual humans or avatars can improve social experiences in virtual reality (VR) games, especially when wearing head-mounted displays. While other researchers have already demonstrated the importance of simulating meaningful eye movements, we compare three gaze models with different levels of fidelity regarding realism: (1) a base model with static fixation and saccadic movements, (2) a proposed simulation model that extends the saccadic model with gaze shifts based on a neural network, and (3) a user's real eye movements recorded by a proprietary eye tracker. Our between-groups design study with 42 subjects evaluates the impact of eye movements on social VR user experience regarding perceived quality of communication and presence. The tasks include free conversation and two guessing games in a co-located setting. Results indicate that a high quality of communication in co-located VR can be achieved without using extended gaze behavior models beyond saccadic simulation. Users might have to gain more experience with VR technology before being able to notice subtle details in gaze animation. Remote VR collaboration involving different tasks requires further investigation in future work.
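A base model of the kind described in (1) — static fixations interrupted by saccadic jumps between targets — might be sketched like this. The timings and the random target choice are illustrative assumptions, not the study's parameters.

```python
import random

def gaze_sequence(targets, fixation_ms=(200, 600), rng=None):
    """Yield (time_ms, target) fixation events: hold each randomly
    chosen target for a random fixation duration, then saccade
    (modeled as an instantaneous jump) to the next target."""
    rng = rng or random.Random(0)
    t = 0.0
    while True:
        target = rng.choice(targets)
        yield t, target
        t += rng.uniform(*fixation_ms)
```

The neural-network model in (2) would replace the random target choice with learned gaze-shift predictions, while (3) bypasses simulation entirely by streaming tracked eye data.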
Populating virtual worlds with intelligent agents can drastically improve a user's sense of presence. Applying these worlds to virtual training, simulations, or (serious) games often requires multiple agents to be simulated in real time. The process of generating believable agent behavior starts with providing a plausible perception and attention process that is both efficient and controllable. We describe a conceptual framework for synthetic perception that specifically considers the mentioned requirements: plausibility, real-time performance, and controllability. A sample implementation focuses on sensing, attention, and memory to demonstrate the framework's capabilities in a real-time game engine scenario. A combination of dynamic geometric sensing and false coloring with static saliency information is provided to exemplify the collection of environmental stimuli. The subsequent attention process handles both bottom-up processing and task-oriented, top-down factors. Behavioral results can be influenced by controlling memory and attention. The example case is demonstrated and discussed alongside future extensions.
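The blend of bottom-up (stimulus-driven saliency) and top-down (task-oriented) factors in such an attention process can be illustrated with a minimal scoring sketch. The weighting scheme and all names are assumptions for illustration, not the framework's actual API.

```python
def attention_scores(stimuli, task_weights, top_down=0.5):
    """Blend bottom-up saliency with task relevance per stimulus.
    `stimuli` maps stimulus name -> saliency in [0, 1];
    `task_weights` maps stimulus name -> task relevance in [0, 1];
    `top_down` steers between purely stimulus-driven (0.0)
    and purely task-driven (1.0) attention."""
    scores = {}
    for name, saliency in stimuli.items():
        relevance = task_weights.get(name, 0.0)
        scores[name] = (1 - top_down) * saliency + top_down * relevance
    return scores

def focus_of_attention(stimuli, task_weights, top_down=0.5):
    """The agent attends to the highest-scoring stimulus."""
    scores = attention_scores(stimuli, task_weights, top_down)
    return max(scores, key=scores.get)
```

Exposing `top_down` (and the task weights themselves) as tunable parameters is one simple way to obtain the controllability the framework calls for: a designer can bias an agent toward its task without discarding salient distractions entirely.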