Institute of Visual Computing (IVC)
Virtual Reality (VR) sickness remains a significant challenge to the widespread adoption of VR technologies. The absence of a standardized benchmark system hinders progress in understanding and effectively countering VR sickness. This paper proposes an initial step towards such a benchmark system, utilizing a novel methodological framework to serve as a common platform for evaluating contributing VR sickness factors and mitigation strategies. Our benchmark, grounded in established theories and leveraging existing research, features both small and large environments. In two research studies, we validated our system by demonstrating its capability to (1) quickly, reliably, and controllably induce VR sickness in both environments, followed by a rapid decline post-stimulus, facilitating cost- and time-effective within-subject studies and increased statistical power, and (2) integrate and evaluate established VR sickness mitigation methods (static and dynamic field of view reduction, blur, and virtual nose), demonstrating their effectiveness in reducing symptoms in the benchmark and enabling their direct comparison within a standardized setting. Our proposed benchmark also enables broader, more comparative research into different technical, setup, and participant variables influencing VR sickness and overall user experience, ultimately paving the way for a comprehensive database to identify the most effective strategies for specific VR applications.
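For illustration, dynamic field of view reduction of the kind integrated into such benchmarks is commonly driven by the user's simulated motion; the following is a minimal, hypothetical sketch (function names and all parameter values are assumptions, not the paper's implementation):

```python
# Minimal sketch of a dynamic field-of-view (vignette) controller.
# The mapping and all parameter values are illustrative assumptions,
# not the parameters used in the paper.

def vignette_radius(linear_speed: float, angular_speed: float,
                    r_max: float = 1.0, r_min: float = 0.4,
                    k_lin: float = 0.15, k_ang: float = 0.01) -> float:
    """Shrink the visible radius (1.0 = full FOV) as simulated motion grows."""
    intensity = k_lin * linear_speed + k_ang * angular_speed
    return max(r_min, r_max - intensity)

# Example: fast forward motion combined with a quick turn narrows the FOV.
print(vignette_radius(linear_speed=3.0, angular_speed=45.0))  # 0.4 (clamped to the minimum radius)
```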
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions
(2024)
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer's perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target's previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts' performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Selecting the best view from multi-camera streams raises the question of which objective metrics should be considered, and existing work on view selection lacks consensus on this point. The literature describes diverse possible metrics, but strategies based solely on information theory, instructional design, or aesthetics fail to incorporate all approaches. In this work, we propose a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, for example in 3D rendering. We adapted a similar measure, viewpoint entropy, for real-world 2D images. Additionally, we incorporated similarity penalization to obtain a more accurate measure of the entropy of a view, which is one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case; a limitation of these scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional-design considerations, we included the visibility of the trainer's body pose, face (particularly while instructing), and hands as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are combined into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate the metrics, and the responses were used to optimize the view recommendation, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with user choices.
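For illustration, a minimal sketch of viewpoint entropy computed from the relative visible areas of regions in a 2D image follows; the paper's exact adaptation and its similarity penalization are not reproduced here:

```python
# Illustrative sketch of viewpoint entropy computed from the visible area of
# regions (e.g., segments or objects) in a 2D camera image. The exact
# adaptation and similarity penalization used in the paper may differ.
import math

def viewpoint_entropy(region_areas: list[float]) -> float:
    """Shannon entropy of the relative visible areas in one view."""
    total = sum(region_areas)
    if total == 0:
        return 0.0
    probs = [a / total for a in region_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# A view in which many regions are visible with balanced areas scores higher
# than a view dominated by a single region.
print(viewpoint_entropy([100, 100, 100, 100]))  # 2.0 bits
print(viewpoint_entropy([370, 10, 10, 10]))     # ~0.5 bits
```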
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity, as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant ones being the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" and leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT), which yields the perceptual upright (PU), under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land, but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removal of both somatosensory and gravity-direction cues while floating in microgravity, or using virtual reality to establish conflicts between them, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.
Interest in virtual reality (VR) for higher education is currently growing, driven by the possibility of representing logistically difficult tasks and by positive results from effectiveness studies. At the same time, however, there is a lack of studies that compare immersive VR environments, non-immersive desktop environments, and conventional learning materials, and that evaluate teaching and learning methodology. For this reason, this contribution addresses the design and implementation of a learning environment for higher education that can be used both with a head-mounted display (HMD) and on a desktop, as well as its evaluation using an experimental group design. The learning environment was built on a software platform developed in-house, and its effectiveness was evaluated and compared using two experimental groups (VR vs. desktop environment) and a control group. In a pilot study, both qualitative and quantitative assessments of the usability of the learning environment were positive in both experimental groups. In addition, positive effects on the cognitive and affective impact of the learning environment were found compared to conventional learning materials. However, hardly any differences between use as a VR or desktop environment emerged at the cognitive and affective level. The analysis of log data, though, points to differences in learning and exploration behavior.
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such awareness can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one's attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are often linked to perception mechanisms associated with our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. However, providing proximity and collision cues can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to their cost and effort to operate, as well as hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the framework of peripersonal and extrapersonal space perception that supports 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Users could rather easily judge the different cues at a reasonably high granularity, which may be sufficient to support common navigation tasks in a 3DUI.
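As a rough, purely illustrative sketch of the kind of cue mapping such foot-sole feedback could use (thresholds and scaling are assumptions, not the stimulus parameters of the study):

```python
# Hypothetical cue mapping for foot-sole feedback; thresholds and scaling
# are illustrative assumptions, not the values used in the study.

def proximity_vibration(distance_m: float, max_range_m: float = 1.5) -> float:
    """Continuous vibrotactor intensity in [0, 1], stronger when closer."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

def collision_pulse(impact_force: float, force_max: float = 50.0) -> float:
    """Bass-shaker amplitude in [0, 1], scaled by impact force."""
    return min(1.0, impact_force / force_max)

print(proximity_vibration(0.3))   # 0.8 -> object is close
print(collision_pulse(25.0))      # 0.5 -> medium-strength impact
```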
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
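As a rough, API-agnostic illustration of the streaming pattern that smooth time-series playback depends on (names and the threading model are assumptions; the actual system uploads time steps directly into GPU buffers):

```python
# Simplified double-buffered streaming of time steps: while the current step
# is being rendered, the next one is loaded in the background.
# All names are illustrative; the real system targets GPU memory directly.
from concurrent.futures import ThreadPoolExecutor

def load_time_step(step: int) -> bytes:
    # Placeholder: in the real system this reads one scalar field
    # (~2.5 GB per step in the Exajet data) from disk.
    return bytes(8)  # dummy payload

def render(data: bytes) -> None:
    pass  # placeholder for AMR volume rendering of one time step

def play_animation(num_steps: int) -> None:
    with ThreadPoolExecutor(max_workers=1) as loader:
        pending = loader.submit(load_time_step, 0)
        for step in range(num_steps):
            current = pending.result()                 # wait for this step
            if step + 1 < num_steps:                   # prefetch the next one
                pending = loader.submit(load_time_step, step + 1)
            render(current)                            # overlaps with loading

play_animation(num_steps=423)  # 423 time steps, as in the Exajet data set
```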
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. These limitations have recently been overcome individually, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
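For context, NeRF-style volume rendering accumulates colour along each camera ray from the densities and colours the MLP predicts at sample points; the following is a generic numpy sketch of the standard quadrature, not the KiloNeuS renderer itself:

```python
# Standard volume rendering quadrature used by NeRF-style methods:
# C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
# with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
# Generic sketch, not specific to KiloNeuS.
import numpy as np

def composite_ray(sigmas: np.ndarray, colors: np.ndarray,
                  deltas: np.ndarray) -> np.ndarray:
    """sigmas: (N,), colors: (N, 3), deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                   # accumulated RGB

# Example: four samples along one ray
sig = np.array([0.0, 0.5, 2.0, 4.0])
col = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
print(composite_ray(sig, col, deltas=np.full(4, 0.25)))
```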
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
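To illustrate the principle (a plain CPU sketch, not the article's GPU implementation): a point lies inside a closed, watertight mesh if a ray cast from it crosses the surface an odd number of times; RT cores accelerate exactly these ray/triangle tests and the BVH traversal that finds them.

```python
# CPU sketch of a ray-cast parity test for point containment.
# RT cores accelerate the ray/triangle intersections and BVH traversal
# that this brute-force loop performs in software.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def point_inside(point, triangles, direction=np.array([1.0, 0.3, 0.7])):
    """Parity test: an odd number of ray/mesh crossings means the point is inside."""
    hits = sum(ray_triangle(point, direction, *tri) is not None for tri in triangles)
    return hits % 2 == 1

# Unit tetrahedron as a tiny closed test mesh.
A, B, C, D = (np.array(p, dtype=float) for p in
              [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
tet = [(A, B, C), (A, B, D), (A, C, D), (B, C, D)]
print(point_inside(np.array([0.1, 0.1, 0.1]), tet))  # True  (inside)
print(point_inside(np.array([1.0, 1.0, 1.0]), tet))  # False (outside)
```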
It is challenging to provide users with a haptic weight sensation of virtual objects in VR since current consumer VR controllers and software-based approaches such as pseudo-haptics cannot render appropriate haptic stimuli. To overcome these limitations, we developed a haptic VR controller named Triggermuscle that adjusts its trigger resistance according to the weight of a virtual object. Therefore, users need to adapt their index finger force to grab objects of different virtual weights. Dynamic and continuous adjustment is enabled by a spring mechanism inside the casing of an HTC Vive controller. In two user studies, we explored the effect on weight perception and found large differences between participants for sensing change in trigger resistance and thus for discriminating virtual weights. The variations were easily distinguished and associated with weight by some participants while others did not notice them at all. We discuss possible limitations, confounding factors, how to overcome them in future research and the pros and cons of this novel technology.
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during, and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also reduced the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants indicated that the HeadJoystick could be more physically fatiguing after prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially for telepresence drones.
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VE and RE). All feedback modalities have been designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen. It has also been shown that perceptual aspects of virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied; these methods are used to create and evaluate the interfaces, and to this end seven experiments were performed. All five prototypes use a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VE and RE target the feet. For touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface, and a psychophysical study highlights the potential of this interface. The back of a smartphone is used for haptic feedback in the third prototype; in addition to a psychophysical study, it is found that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. The first prototype stimulates the sides of the feet to provide proximity information about remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a purpose-built foot platform that provides several feedback modalities shows that self-motion perception can be increased.
Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head Mounted Displays and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user’s gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing global illumination of a scene interactively by allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
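As a hypothetical illustration of the foveated sampling idea described above (the falloff shape and parameters are assumptions, not those of the thesis), the probability of tracing a pixel can be tied to its angular distance from the tracked gaze point:

```python
# Illustrative foveated sampling density: full density near the gaze point,
# falling off with eccentricity to a low floor in the periphery. The falloff
# shape and parameters are assumptions, not those used in the thesis.
import math

def sample_probability(eccentricity_deg: float,
                       fovea_deg: float = 5.0,
                       sigma_deg: float = 15.0,
                       floor: float = 0.05) -> float:
    """Probability of tracing a ray for a pixel at the given angular
    distance from the tracked gaze direction."""
    if eccentricity_deg <= fovea_deg:
        return 1.0
    falloff = math.exp(-((eccentricity_deg - fovea_deg) ** 2) / (2 * sigma_deg ** 2))
    return max(floor, falloff)

for ecc in (0, 5, 15, 30, 60):
    print(ecc, round(sample_probability(ecc), 3))
```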
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
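A minimal sketch of what such an automatic speed control could look like (the thresholds and linear ramp are illustrative assumptions, not the study's implementation):

```python
# Hypothetical automatic speed control: the commanded speed is capped as the
# robot approaches the nearest obstacle. All values are illustrative, not the
# parameters used in the study.

def limit_speed(user_speed: float, nearest_obstacle_m: float,
                v_max: float = 1.2, slow_radius_m: float = 1.5,
                stop_radius_m: float = 0.3) -> float:
    if nearest_obstacle_m <= stop_radius_m:
        return 0.0
    if nearest_obstacle_m >= slow_radius_m:
        return min(user_speed, v_max)
    scale = (nearest_obstacle_m - stop_radius_m) / (slow_radius_m - stop_radius_m)
    return min(user_speed, v_max * scale)

print(limit_speed(1.2, 3.0))   # open space: full speed (1.2 m/s)
print(limit_speed(1.2, 0.9))   # near an obstacle: 0.6 m/s
print(limit_speed(1.2, 0.2))   # too close: 0.0 m/s
```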
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
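For illustration, the jitter manipulation amounts to superimposing a sinusoid on the simulated forward trajectory; a minimal sketch with assumed amplitude and frequency values follows:

```python
# Sketch of adding sinusoidal jitter to a simulated forward trajectory, as in
# the jitter conditions described above. Amplitude and frequency values are
# illustrative, not those used in the study.
import math

def camera_position(t: float, forward_speed: float = 1.5,
                    jitter_axis: str = "vertical",
                    amplitude_m: float = 0.05, freq_hz: float = 2.0):
    """Return (x, y, z): x = lateral, y = vertical, z = forward distance."""
    jitter = amplitude_m * math.sin(2 * math.pi * freq_hz * t)
    x = jitter if jitter_axis == "horizontal" else 0.0
    y = jitter if jitter_axis == "vertical" else 0.0
    return (x, y, forward_speed * t)

print(camera_position(0.125))  # vertical jitter at its peak, 0.1875 m travelled
```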
OSC data
(2020)
Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays
(2020)
Gone But Not Forgotten: Evaluating Performance and Scalability of Real-Time Mesoscopic Agents
(2020)
Telepresence robots allow people to participate in remote spaces, yet they can be difficult to manoeuvre with people and obstacles around. We designed a haptic-feedback system called “FeetBack,” in which users place their feet when driving a telepresence robot. When the robot approaches people or obstacles, haptic proximity and collision feedback are provided on the respective sides of the feet, helping inform users about events that are hard to notice through the robot’s camera views. We conducted two studies: one exploring the use of FeetBack in virtual environments, the other focused on real environments. We found that FeetBack can increase spatial presence in simple virtual environments. Users valued the feedback to adjust their behaviour in both types of environments, though it was sometimes too frequent or unneeded in certain situations after a period of time. These results point to the value of foot-based haptic feedback for telepresence robot systems, while also highlighting the need to design context-sensitive haptic feedback.
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built, projection-based, multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience; however, it may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head-tracking. A study was performed to evaluate this display with 20 participants playing custom-designed games. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported better or equal spatial perception, while the cognitive load stayed the same.
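Head-coupled perspective of this kind is commonly implemented with an off-axis (generalized) perspective projection recomputed each frame from the tracked head position and the physical corners of the display surface; the following is a generic sketch of that computation (the display geometry and names are illustrative, not taken from the paper):

```python
# Sketch of head-coupled (off-axis) perspective for a fixed display layer:
# the viewing frustum is recomputed each frame from the tracked head position
# and the physical corners of the layer. Geometry values are illustrative.
import numpy as np

def offaxis_frustum(eye, lower_left, lower_right, upper_left, near=0.1):
    """Return (left, right, bottom, top) frustum extents at the near plane."""
    vr = lower_right - lower_left; vr /= np.linalg.norm(vr)   # screen right
    vu = upper_left - lower_left;  vu /= np.linalg.norm(vu)   # screen up
    vn = np.cross(vr, vu);         vn /= np.linalg.norm(vn)   # screen normal
    va = lower_left - eye
    vb = lower_right - eye
    vc = upper_left - eye
    d = -np.dot(vn, va)                  # eye-to-screen distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# Eye 0.6 m in front of a 0.4 m x 0.3 m layer centred at the origin.
eye = np.array([0.0, 0.0, 0.6])
print(offaxis_frustum(eye,
                      np.array([-0.2, -0.15, 0.0]),
                      np.array([ 0.2, -0.15, 0.0]),
                      np.array([-0.2,  0.15, 0.0])))
```

Moving the tracked eye position off-centre yields an asymmetric frustum, which is what keeps the three layers perspectively consistent despite their discrete depths.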
Are There Extended Cognitive Improvements from Different Kinds of Acute Bouts of Physical Activity?
(2020)
Acute bouts of physical activity of at least moderate intensity have been shown to enhance cognition in young as well as older adults. This effect has been observed for different kinds of activities, such as aerobic or strength and coordination training. However, only a few studies have directly compared these activities regarding their effectiveness. Furthermore, most previous studies have mainly focused on inhibition and have not examined other important core executive functions (i.e., updating, switching), which are also essential for our behavior in daily life (e.g., staying focused, resisting temptations, thinking before acting). Therefore, this study aimed to directly compare two kinds of activities, aerobic and coordinative, and examine how they might affect executive functions (i.e., inhibition, updating, and switching) in a test-retest protocol. This comparison is of practical interest, as coordinative exercises, for example, require little space and would be preferable in settings such as an office or a classroom. Furthermore, we designed our experiment in such a way that learning effects were controlled. We then tested the influence of acute bouts of physical activity on executive functioning in both young and older adults (young: 16–22 years, old: 65–80 years). Overall, we found no differences between aerobic and coordinative activities; in fact, benefits from physical activity occurred only in the updating tasks in young adults. Additionally, we observed some learning effects that might have influenced the results. Thus, it is important to control cognitive tests for learning effects in test-retest studies, as well as to analyze effects of physical activity at the construct level of executive functions.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, in principle, mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of cues and can judge wind direction well, especially when they move their head and wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
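As a minimal illustration of the head-rotation compensation mentioned above (a 2D simplification with assumed conventions, not the FaceHaptics controller itself):

```python
# Sketch of keeping a wind cue world-fixed while the head rotates: the desired
# world-space wind direction is re-expressed in the head frame every frame so
# the arm-mounted actuator can be repositioned. The 2D simplification and
# angle conventions are illustrative assumptions.
import math

def wind_in_head_frame(world_wind_azimuth_deg: float,
                       head_yaw_deg: float) -> float:
    """Azimuth of the wind source relative to the face, wrapped to [-180, 180)."""
    return (world_wind_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# Wind fixed at 30 degrees in the world; the user turns their head to 50 degrees.
print(wind_in_head_frame(30.0, 50.0))  # -20.0 degrees relative to the current head orientation
```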