006 Special computer methods (Spezielle Computerverfahren)
H-BRS Bibliography
- yes (29)
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (29)
Document Type
- Article (14)
- Conference Object (11)
- Doctoral Thesis (2)
- Part of a Book (1)
- Preprint (1)
Language
- English (29)
Keywords
- haptics (3)
- 3D user interface (2)
- Augmented Reality (2)
- Ray tracing (2)
- Virtual Reality (2)
- guidance (2)
- virtual reality (2)
- 3D navigation (1)
- AR (1)
- Camera selection (1)
- Camera view analysis (1)
- Codes (1)
- Computer graphics (Computergrafik) (1)
- Cybersickness (1)
- Data structures (1)
- Demonstration-based training (1)
- Entropy (1)
- Feedback (1)
- Games and Simulations for Learning (1)
- Geometry (1)
- HDBR (1)
- Hardware (1)
- Higher education (1)
- Human factors (1)
- Human orientation perception (1)
- Hyperspectral image (1)
- Instruction design (1)
- Locomotion (1)
- Motion Sickness (1)
- Multi-camera (1)
- Neural representations (1)
- Neuroscience (1)
- Perception (1)
- Perceptual Upright (1)
- Proximity (1)
- Psychology (1)
- Raman microscopy (1)
- Reasoning (1)
- Recommender systems (1)
- Serious Games (1)
- Survey (1)
- Three-dimensional displays (1)
- Topology (1)
- Traffic Simulations (1)
- Travel Techniques (1)
- VR (1)
- View selection (1)
- Virtual Agents (1)
- Virtual reality (Virtuelle Realität) (1)
- Visual perception (Visuelle Wahrnehmung) (1)
- adaptive trigger (1)
- audio-tactile feedback (1)
- authoring tools (1)
- brightfield microscopy (1)
- collision (1)
- component analyses (1)
- computer vision (1)
- controller design (1)
- depth perception (1)
- head down bed rest (1)
- image fusion (1)
- interactive computer graphics (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- mixed reality (1)
- multisensory (1)
- navigational search (1)
- pansharpening (1)
- path tracing (1)
- psychophysics (1)
- real-time (1)
- sensory perception (1)
- space flight analog (1)
- spatial orientation (1)
- spatial updating (1)
- subjective visual vertical (1)
- vibration (1)
- weight perception (1)
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
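The multisensory integration described above is commonly modelled as a weighted vector sum of the directions indicated by vision, gravity, and the body axis. As a minimal sketch of that cue-combination idea (the weights and angles below are illustrative assumptions, not values measured in this study):

```python
import math

def perceptual_upright(vision_deg, gravity_deg, body_deg,
                       w_vision, w_gravity, w_body):
    """Weighted vector-sum model of the perceptual upright (PU).

    Each cue is a direction in degrees (0 = towards the head); the PU
    is taken as the direction of the weighted sum of the three unit
    vectors. Weights are illustrative, not fitted values.
    """
    x = y = 0.0
    for angle, w in ((vision_deg, w_vision),
                     (gravity_deg, w_gravity),
                     (body_deg, w_body)):
        rad = math.radians(angle)
        x += w * math.sin(rad)
        y += w * math.cos(rad)
    return math.degrees(math.atan2(x, y))

# With the visual cue tilted 90 degrees and gravity/body cues upright,
# the PU is pulled partway towards the visual cue.
pu = perceptual_upright(90.0, 0.0, 0.0, 0.25, 0.5, 0.25)
```

Reducing a cue's weight (e.g. the visual weight under neutral buoyancy) pulls the predicted PU back towards the remaining cues, which is the quantity the OChaRT measurement taps.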
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
BACKGROUND: Humans demonstrate many physiological changes in microgravity for which long-duration head down bed rest (HDBR) is a reliable analog. However, information on how HDBR affects sensory processing is lacking.
OBJECTIVE: We previously showed [25] that microgravity alters the weighting applied to visual cues in determining the perceptual upright (PU), an effect that lasts long after return. Does long-duration HDBR have comparable effects?
METHODS: We assessed static spatial orientation using the luminous line test (subjective visual vertical, SVV) and the oriented character recognition test (PU) before, during and after 21 days of 6° HDBR in 10 participants. Methods were essentially identical to those previously used in orbit [25].
RESULTS: Overall, HDBR had no effect on the reliance on visual relative to body cues in determining the PU. However, when considering the three critical time points (pre-bed rest, end of bed rest, and 14 days post-bed rest) there was a significant decrease in reliance on visual relative to body cues, as found in microgravity. The ratio had an average time constant of 7.28 days and returned to pre-bed-rest levels within 14 days. The SVV was unaffected.
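The reported time constant implies an exponential return of the vision-to-body weighting ratio towards its pre-bed-rest level. A minimal sketch of that recovery curve, assuming a simple single-exponential form (the functional form, baseline, and shift values are illustrative; only the 7.28-day time constant comes from the abstract):

```python
import math

def visual_weight_ratio(t_days, baseline, delta, tau=7.28):
    """Hypothetical exponential recovery of the vision/body weighting
    ratio after bed rest ends.

    baseline: assumed pre-bed-rest ratio
    delta:    assumed shift at the end of bed rest
    tau:      time constant in days (7.28, per the abstract)
    """
    return baseline - delta * math.exp(-t_days / tau)

# exp(-14 / 7.28) ~= 0.146, so by 14 days post-bed-rest roughly 85%
# of the shift has decayed, consistent with a return to baseline.
residual_fraction = math.exp(-14.0 / 7.28)
```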
CONCLUSIONS: We conclude that bed rest can be a useful analog for the study of the perception of static self-orientation during long-term exposure to microgravity. More detailed work on the precise time course of our effects is needed in both bed rest and microgravity conditions.
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated if HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants rated the HeadJoystick as potentially more physically fatiguing after prolonged use. Overall, participants felt more engaged with HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
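Leaning-based interfaces of this kind typically map the leaning offset from a calibrated rest pose to a simulated velocity per axis, with a dead zone to prevent drift and a nonlinear gain for fine control near rest. A minimal sketch of such a mapping (all parameter values are illustrative assumptions, not HeadJoystick's actual transfer function):

```python
def leaning_velocity(offset_m, deadzone_m=0.02, max_offset_m=0.15,
                     max_speed_mps=5.0, exponent=1.5):
    """Map a seated leaning offset (metres from the calibrated rest
    pose) to a simulated flying velocity along one axis.

    A dead zone suppresses unintended drift; the power curve gives
    fine-grained control for small leans. Values are illustrative.
    """
    sign = 1.0 if offset_m >= 0 else -1.0
    mag = abs(offset_m)
    if mag < deadzone_m:
        return 0.0
    # Normalise the usable leaning range to [0, 1], then apply gain.
    t = min((mag - deadzone_m) / (max_offset_m - deadzone_m), 1.0)
    return sign * max_speed_mps * t ** exponent
```

Applying the same mapping independently to the forward/backward, left/right, and up/down leaning components yields the three translational degrees of freedom described above, while physical chair rotation supplies yaw.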
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
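The volume rendering step referred to above is, in the NeRF family of methods, the standard quadrature that accumulates colour along a ray from sampled densities. A minimal NumPy sketch of that generic formulation (this is the textbook NeRF quadrature for context, not KiloNeuS-specific code):

```python
import numpy as np

def volume_render(sigmas, rgbs, deltas):
    """Standard NeRF volume-rendering quadrature along one ray.

    sigmas: (N,) densities at the sampled points
    rgbs:   (N, 3) emitted colours at the sampled points
    deltas: (N,) distances between consecutive samples

    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * rgbs).sum(axis=0)
```

Because each sample requires an MLP query for (sigma, rgb), evaluating this sum for millions of rays per frame is what makes wide, deep networks non-interactive, and why representations built from many small networks target this step.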