H-BRS Bibliography
Interest in Virtual Reality (VR) for higher-education teaching is currently growing, driven by the possibility of representing logistically difficult tasks and by positive results from effectiveness studies. At the same time, however, there is a lack of studies that compare immersive VR environments, non-immersive desktop environments, and conventional learning materials, and that evaluate teaching-and-learning-method aspects. This paper therefore addresses the design and implementation of a learning environment for higher education that can be used both with a head-mounted display (HMD) and on a desktop, as well as its evaluation using an experimental group design. The learning environment was built on a software platform developed in-house, and its effectiveness was evaluated and compared using two experimental groups (VR vs. desktop environment) and a control group. In a pilot study, both qualitative and quantitative assessments of the learning environment's usability were positive in both experimental groups. Moreover, the learning environment showed positive effects on cognitive and affective outcomes compared to conventional learning materials. Differences between use as a VR environment and as a desktop environment, however, were barely observable at the cognitive and affective level. An analysis of log data nevertheless points to differences in learning and exploration behavior.
Today's Virtual Environment (VE) frameworks use scene graphs to represent virtual worlds. We believe this is a sound technical approach, but a VE framework should model its application area as accurately as possible, and a scene graph is therefore not the best way to represent a virtual world. In this paper we present an easily extensible model for describing entities in the virtual world. We further show how this model drives the design of our VE framework and how it is integrated.
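The entity model the abstract describes could be sketched as follows. This is our own minimal illustration, not the paper's actual design: entities carry named aspects (components), and each subsystem, such as the renderer, derives its own view (e.g. a scene-graph node) from the aspects it understands.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an extensible entity model: an entity is a bag
# of named aspects rather than a node fixed in a scene graph.
@dataclass
class Entity:
    name: str
    aspects: dict = field(default_factory=dict)  # aspect name -> data

    def add_aspect(self, key, value):
        self.aspects[key] = value

# Example: a door entity with rendering- and physics-related aspects.
door = Entity("door")
door.add_aspect("transform", (1.0, 0.0, 2.0))
door.add_aspect("geometry", "door.obj")
door.add_aspect("physics", {"mass": 20.0})

# The renderer would build its scene graph only from entities that carry
# a geometry aspect; other subsystems read the aspects they care about.
renderable = [e for e in [door] if "geometry" in e.aspects]
```

New aspects can be added without touching the entity class or the renderer, which is the extensibility property the abstract claims for its model.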
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize these objects, the position and orientation of the user's view, i.e. of the camera, must be known. Tracking the user's viewpoint is thus an essential area in MR applications, especially for interaction and navigation. In present systems, initialization is often complex. For this reason, we introduce a new method for fast initialization of markerless object tracking. The method is based on Speeded-Up Robust Features (SURF) and, paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
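The offline/online split mentioned above can be illustrated with a small sketch. All names here are ours, not the paper's: the offline stage precomputes and normalizes feature descriptors of the target object; the online stage matches descriptors from a live frame against that precomputed set. A real system would use SURF descriptors; plain vectors stand in for them here.

```python
import numpy as np

def offline_stage(reference_descriptors):
    # Done once, ahead of time: normalize and store the reference
    # descriptors of the object to be tracked.
    ref = np.asarray(reference_descriptors, dtype=float)
    return ref / np.linalg.norm(ref, axis=1, keepdims=True)

def online_stage(ref, frame_descriptors, max_dist=0.5):
    # Done per frame: nearest-neighbor match of live descriptors
    # against the precomputed reference set.
    frame = np.asarray(frame_descriptors, dtype=float)
    frame = frame / np.linalg.norm(frame, axis=1, keepdims=True)
    matches = []
    for i, d in enumerate(frame):
        dists = np.linalg.norm(ref - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))  # (frame index, reference index)
    return matches

ref = offline_stage([[1.0, 0.0], [0.0, 1.0]])
matches = online_stage(ref, [[0.9, 0.1]])
```

Because the reference set is fixed, all cost that can be moved into `offline_stage` (here, normalization; in practice, descriptor extraction and index construction) is paid only once, which is exactly why optimizing the offline stage matters.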
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
In contrast to projection-based systems, large, high-resolution multi-display systems offer high pixel density across a large visualization area. Users can step up to the displays and see a small but highly detailed region; if they move back a few steps, they no longer perceive details at pixel level but instead get an overview of the whole visualization. Rendering techniques for design evaluation and review, or for visualizing large volume data (e.g. Big Data applications), often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates.
A view-direction-based (VDB) rendering technique renders the user's central field of view in high quality, whereas the surroundings are rendered with a level-of-detail approach depending on their distance to the user's central field of view. This approach mimics the physiology of the human eye and preserves both the advantage of highly detailed information when standing close to the multi-display system and the general overview of the whole scene. In this paper we propose a prototype implementation and evaluation of a focus-based rendering technique built on a hybrid ray-tracing/sparse-voxel-octree rendering approach.
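The core idea of view-direction-based level-of-detail selection can be sketched as follows. The function and all parameter values are our illustrative assumptions, not the paper's implementation: a screen tile's quality level grows with its angular distance from the user's view direction, with full quality inside a central (foveal) region.

```python
import math

def lod_level(view_dir, tile_dir, fovea_deg=15.0, step_deg=20.0, max_level=4):
    """Pick a level of detail (0 = finest) for a tile seen along tile_dir,
    given the user's view direction view_dir (both 3D vectors)."""
    # Angle between the view direction and the direction to the tile.
    dot = sum(a * b for a, b in zip(view_dir, tile_dir))
    norm = math.sqrt(sum(a * a for a in view_dir)) * \
           math.sqrt(sum(b * b for b in tile_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle <= fovea_deg:
        return 0  # central field of view: full quality
    # Coarser level every step_deg outside the fovea, capped at max_level.
    return min(max_level, 1 + int((angle - fovea_deg) // step_deg))
```

In a hybrid renderer along the lines the abstract describes, level 0 tiles might be ray traced at full resolution while higher levels fall back to coarser sparse-voxel-octree traversal.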
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VEs and REs). All feedback modalities were designed to study performance and perception while integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen, and that haptic feedback can improve perceptual aspects of virtual environments. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied to create and evaluate the interfaces; in total, seven experiments were performed. Each of the five prototypes uses a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VEs and REs target the feet. For touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The second interface combines elastic materials with a touchscreen; a psychophysical study highlights its potential. The third prototype uses the back of a smartphone for haptic feedback; besides a psychophysical study, it was found that touch accuracy could be increased. The interfaces in the second application area likewise highlight the versatility of haptic feedback. The first of these stimulates the sides of the feet to convey proximity information about remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a purpose-built foot platform providing several feedback modalities shows that self-motion perception can be increased.
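The proximity-feedback idea for the feet could be mapped as in the sketch below. This is a hypothetical illustration, not the thesis's implementation: distance readings from the robot's range sensors are clamped to a sensing range and inverted, so that nearer obstacles produce stronger vibrotactile intensity at the corresponding side of the foot.

```python
def proximity_to_intensity(distance_m, min_d=0.2, max_d=2.0):
    """Map a range-sensor distance (meters) to a vibration intensity
    in [0, 1]: near obstacles -> 1.0, obstacles at max range -> 0.0.
    The sensing range [min_d, max_d] is an assumed parameter."""
    d = max(min_d, min(max_d, distance_m))
    return (max_d - d) / (max_d - min_d)

def feet_feedback(left_dist, right_dist):
    # One intensity per tactor side; a real system would drive actuators.
    return {"left": proximity_to_intensity(left_dist),
            "right": proximity_to_intensity(right_dist)}
```

A linear mapping is the simplest choice; perceptual studies like those in the thesis are what would justify a different (e.g. logarithmic) intensity curve.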