Zentrale Archivierung und verteilte Kommunikation digitaler Bilddaten in der Pneumokoniosevorsorge
(2010)
Pneumoconiosis screening examinations require the reading of a chest X-ray according to the ILO classification of pneumoconioses. These images are now largely produced and communicated digitally. This places new demands on the technology and workflow mechanisms used in order to ensure an efficient process of examination, reading, and documentation.
The starting point for the semester structure presented below was the conversion of the existing Diplom degree programs to the Bachelor's degree. The department offers three undergraduate programs: the technical programs Electrical Engineering and Mechanical Engineering, and the interdisciplinary program Technology Journalism, which belongs to the humanities and social sciences. Dual-study programs modeled on the undergraduate programs are also available.
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, e.g. interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene, which we use to place stereoscopic effigies of the players inside the digital game. The game showcases how stereoscopic vision can enable a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
In this contribution, we describe the activities and promotion programs installed at Bonn-Rhein-Sieg University as an institution and at the Department of Computer Science in particular to increase the total number of computer science students, especially the proportion of women. We report on our experiences in addressing gender aspects in education and evaluate the outcome of our programs with respect to our equal-rights strategy for women. We propose a closer look at mental self-theories, enabled by e-portfolios, to also address gender issues in computer science. Moreover, we identify and discuss reasons that may be responsible for the reduced interest of female young adults in particular in choosing a computer science study program.
This paper presents Volt, an interactive volume renderer for the NVIDIA CUDA architecture. Acceleration is achieved by exploiting the technical properties of the CUDA device, by partitioning the algorithm, and by executing the CUDA kernel asynchronously. Parallelism is exploited on the host, on the device, and between host and device. We show how the computations are carried out efficiently through targeted use of resources. The results are copied back to the host, so the kernel does not have to run on the device used for display. No synchronization of the CUDA threads is required.
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK, and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface for Odin II. The goal is to visualize the results of these tools in order to explore how the structure changes during compilation and optimization, which can be helpful for researching new FPGA architectures and improving the workflow.
In this contribution, a machine vision inspection system is presented which is designed as a length-measuring sensor. It is developed to be applied to a range of heat shrink tubes varying in length, diameter, and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it has to operate in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for contact-free and fully automatic control of 100% of produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length, and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of more than 99.7%.
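The multi-measurement idea described above — fusing several sub-pixel estimates of the same tube segment and checking the result against the 0.1 mm precision requirement — can be sketched as follows. This is a hypothetical simplification for illustration; the abstract does not specify the actual fusion algorithm, and the function name and tolerance parameter are assumptions:

```python
import statistics

def fuse_measurements(samples_mm, tolerance_mm=0.1):
    """Fuse repeated length measurements of one tube segment.

    samples_mm: individual sub-pixel length estimates (in mm) taken
    while the segment passes through the camera's field of view.
    Returns the fused estimate, its scatter, and whether the scatter
    stays within the assumed 0.1 mm precision requirement.
    """
    mean = statistics.fmean(samples_mm)
    spread = statistics.stdev(samples_mm) if len(samples_mm) > 1 else 0.0
    return mean, spread, spread <= tolerance_mm
```

Averaging repeated estimates reduces random measurement noise, which is one plausible way a per-frame sensor can reach a precision finer than a single pixel-based reading.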
"Visual Computing" (VC) fasst als hochgradig aktuelles Forschungsgebiet verschiedene Bereiche der Informatik zusammen, denen gemeinsam ist, dass sie sich mit der Erzeugung und Auswertung visueller Signale befassen. Im Fachbereich Informatik der FH Bonn-Rhein-Sieg nimmt dieser Aspekt eine zentrale Rolle in Lehre und Forschung innerhalb des Studienschwerpunktes Medieninformatik ein. Drei wesentliche Bereiche des VC werden besonders in diversen Lehreinheiten und verschiedenen Projekten vermittelt: Computergrafik, Bildverarbeitung und Hypermedia-Anwendungen. Die Aktivitäten in diesen drei Bereichen fließen zusammen im Kontext immersiver virtueller Visualisierungsumgebungen.
These proceedings contain the contributions to the 12th workshop on virtual and augmented reality of the VR/AR special interest group of the Gesellschaft für Informatik e.V. The workshop serves the exchange of information and ideas among German-speaking researchers and offers an ideal setting for presenting current results and projects from research and development to an expert audience for discussion. In particular, we want to give young researchers the opportunity to present their work.
These are the proceedings of the eleventh in a series of successful workshops on virtual and augmented reality initiated by the VR/AR special interest group of the Gesellschaft für Informatik e.V. As an established platform for the exchange of information and ideas within the German-speaking VR/AR community, the workshop offers an ideal setting for presenting current results and projects from research and development to an expert audience for discussion. In particular, we want to give young researchers the opportunity to present their work.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Selecting the best view from multi-camera streams raises the question of which objective metrics should be considered, and existing work on view selection lacks consensus on this point: the literature describes diverse possible metrics, and individual strategies such as information-theoretic, instructional-design, or aesthetics-motivated approaches each fail to incorporate the others. In this work, we postulate a strategy incorporating information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for instance in 3D rendering. We adapted one such measure, the viewpoint entropy, to real-world 2D images, and additionally incorporated a similarity penalization to obtain a more accurate entropy estimate for a view, which serves as one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case; a limitation of these scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional-design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics; to incorporate domain knowledge, we added the visibility of predetermined regions as another metric. All of these metrics are combined into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate the metrics, and the responses from the study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates a good match with user choices.
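NDCG, the evaluation measure quoted above, compares a produced ranking against the ideal ordering of the same relevance scores. The sketch below is a generic textbook implementation, not the authors' code:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each relevance score is discounted
    # by log2 of its rank position (rank 0 -> log2(2) = 1, so the
    # top position is undiscounted).
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering,
    # so a perfect ranking scores exactly 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

Under this measure, an NDCG of 0.912 means the recommended view ordering placed highly rated views close to where the study participants would have put them.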
Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removing both somatosensory and direction-of-gravity cues while floating in microgravity, or using virtual reality to establish conflicts between them, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.
Using an Embroidery Machine to Achieve a Deeper Understanding of Electromechanical Applications
(2013)
The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. Frequently, there are strong arguments for supporting body-centered self-motion cues as they may improve orientation and spatial judgments, and reduce motion sickness. Yet, how these cues can be introduced while the user is not moving around physically is not well understood. Actuated solutions such as motion platforms can be an option, but they are expensive and difficult to maintain. Alternatively, within this article we focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.
The perception of the perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of various gravity-related and body-based cues. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity level of about 0.15 g is required to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically considered in order to quantify more precisely the gravity threshold above which a perceptible influence is observable, and to confirm the above hypothesis. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is as effective as standing under normal gravity in evoking the sense of perceptual upright. The data obtained further indicate that a gravitational field of at least 0.15 g is required to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
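The thresholds quoted above can be checked with a line of arithmetic. The values below are taken from the abstract; 9.81 m/s² for standard gravity is an assumption on our part:

```python
G = 9.81                     # standard gravity in m/s^2 (assumed value)
threshold_g = 0.15           # reported threshold for an effective upright cue, in g
moon_accel = 1.6             # lunar surface acceleration used in the abstract, m/s^2
vestibular_threshold = 0.1   # vestibular threshold for linear acceleration, m/s^2

# Lunar gravity expressed as a fraction of Earth gravity: about 0.16 g,
# roughly consistent with the ~0.17 g quoted, and above both the 0.15 g
# upright threshold and the vestibular threshold for linear acceleration.
moon_g = moon_accel / G
print(round(moon_g, 3), moon_g > threshold_g, moon_accel > vestibular_threshold)
```

So both claims in the abstract hold: the Moon's gravity exceeds the proposed 0.15 g orientation threshold, and its linear acceleration lies well above the vestibular detection threshold.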