H-BRS Bibliography
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (41)
Document Type
- Article (30)
- Report (7)
- Conference Object (2)
- Part of Periodical (2)
Keywords
- Perceptual Upright (3)
- Virtuelle Realität (3)
- FPGA (2)
- Forschungsbericht (2)
- Gravitation (2)
- Perception (2)
- Raumwahrnehmung (2)
- Ray Tracing (2)
- Virtual Reality (2)
- computer vision (2)
- haptics (2)
- virtual reality (2)
- 3D navigation (1)
- 3D user interface (1)
- AR (1)
- Adaptive Behavior (1)
- Agents (1)
- Aufrecht (1)
- Augmented Reality (1)
- Camera selection (1)
- Camera view analysis (1)
- Centrifugation (1)
- Centrifuge (1)
- Chemical imaging (1)
- Codes (1)
- Cognition (1)
- Created Gravity (1)
- CyberGlove (1)
- Data structures (1)
- Demonstration-based training (1)
- Ecosystem simulation (1)
- Edutainment (1)
- Electromagnetic Fields (1)
- Emotion (1)
- Entropy (1)
- Executive functions (1)
- FIVIS (1)
- Fahrradfahrsimulator (1)
- Fahrsimulator (1)
- Feedback (1)
- Five Factor Model (1)
- Gefahrenprävention (1)
- Geometry (1)
- Group behavior analysis (1)
- HDBR (1)
- Handzeichenerkennung (1)
- Hardware (1)
- Hochschule Bonn-Rhein-Sieg (1)
- Hochschulehre (1)
- Human orientation perception (1)
- Immersive analytics (1)
- Informations-, Kommunikations- und Medientechnologie (1)
- Instantiation (1)
- Instruction design (1)
- Interaction devices (1)
- Interaktion (1)
- Interventionstudie (1)
- Künstliche Gravitation (1)
- Large display interaction (1)
- Lehr-Lernpsychologie (1)
- Lernen (1)
- Lernumgebung (1)
- Materialwissenschaften (1)
- Molecular rotation (1)
- Multi-camera (1)
- Multimodal hyperspectral data (1)
- Neuroscience (1)
- Older adults (1)
- Organic compounds and Functional groups (1)
- Outer Space Research (1)
- Pattern recognition (1)
- Personality (1)
- Physical activity (1)
- Poisson Disc Distribution (1)
- Pro-MINT-us (1)
- Proximity (1)
- Psychology (1)
- Qualitätspakt Lehre (1)
- Quantum mechanical methods (1)
- Radfahren (1)
- Ray tracing (1)
- Recommender systems (1)
- Rendering (1)
- School experiments (1)
- Software Architecture (1)
- Software Framework (1)
- Somatogravic Illusion (1)
- Supervised classification (1)
- Terrain rendering (1)
- Three-dimensional displays (1)
- Topology (1)
- Unity (1)
- VR (1)
- VR-based systems (1)
- Verkehrserziehung (1)
- Verkehrssimulation (1)
- Vibrational microspectroscopy (1)
- View selection (1)
- Visual Computing (1)
- Wahrnehmung (1)
- Wang-tiles (1)
- Weltraumforschung (1)
- Young adults (1)
- Zentrifuge (1)
- accelerometer (1)
- adaptive trigger (1)
- camera (1)
- collision (1)
- controller design (1)
- data analysis (1)
- data glove (1)
- database (1)
- education (1)
- energy awareness (1)
- eye movement (1)
- eye tracking (1)
- fiducial marker (1)
- foveated rendering (1)
- gaze (1)
- grasp motions (1)
- grasping (1)
- gravito-inertial force (1)
- head down bed rest (1)
- immersive Visualisierung (1)
- infrared pattern (1)
- intelligente virtuelle Agenten (1)
- interaction (1)
- interactive computer graphics (1)
- large-high-resolution displays (1)
- leaning-based interfaces (1)
- locomotion interface (1)
- low power (1)
- mixed reality (1)
- motion capture (1)
- navigational search (1)
- optical tracking (1)
- perceived quality (1)
- perception of upright (1)
- posture analysis (1)
- prehensile motions (1)
- psychophysics (1)
- rapid prototyping tool (1)
- region of interest (1)
- robotic arm (1)
- robotic evaluation (1)
- sensemaking (1)
- software engineering (1)
- space flight analog (1)
- spatial orientation (1)
- spatial updating (1)
- spinal posture (1)
- subjective visual vertical (1)
- tools for education (1)
- user input (1)
- user interaction (1)
- user study (1)
- vestibular system (1)
- vibration (1)
- virtuelle Umgebungen (1)
- wearable sensor (1)
- weight perception (1)
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also reduced the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants indicated that the HeadJoystick could be more physically fatiguing after prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially for telepresence drones.
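The abstract does not specify HeadJoystick's control law. As a hedged sketch, a leaning-based flying interface of this kind might map the user's displacement from a calibrated neutral pose to a simulated translation velocity along each axis; the deadzone, gain, and speed cap below are illustrative assumptions, not values from the study:

```python
def lean_to_velocity(offset_m, deadzone=0.02, gain=2.0, v_max=5.0):
    """Map a leaning offset along one axis (metres from the calibrated
    neutral pose) to a simulated translation velocity (m/s).

    Deadzone, gain, and speed cap are illustrative values only, not
    parameters taken from the paper.
    """
    if abs(offset_m) < deadzone:
        return 0.0  # ignore small postural sway around the neutral pose
    # ramp velocity up smoothly from the edge of the deadzone
    effective = (abs(offset_m) - deadzone) * gain
    v = min(effective, v_max)
    return v if offset_m > 0 else -v

# neutral posture produces no motion; leaning further moves faster
print(lean_to_velocity(0.01))   # 0.0
print(lean_to_velocity(0.1))
```

The deadzone is the usual way such interfaces avoid drift from breathing and small postural sway while seated.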
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when getting closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but should be considered as optionally available, especially in situations involving social interactions.
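The abstract does not give the exact control method; a minimal sketch of a proximity-based speed limiter of the kind described, with illustrative distance thresholds (all values below are assumptions), could look like this:

```python
def speed_limit(d_obstacle, d_stop=0.3, d_free=2.0, v_max=1.5):
    """Cap the telepresence robot's speed by proximity to the nearest
    obstacle: full speed beyond d_free metres, zero inside d_stop,
    and a linear ramp in between.

    All thresholds and the top speed are illustrative assumptions,
    not parameters from the studies.
    """
    if d_obstacle <= d_stop:
        return 0.0                      # too close: stop
    if d_obstacle >= d_free:
        return v_max                    # open space: no limiting
    # linear interpolation between the two thresholds
    return v_max * (d_obstacle - d_stop) / (d_free - d_stop)
```

The user's joystick command would then be clamped to this cap each control cycle, which is what makes narrow sections feel "automatically slowed down".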
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Selecting the best view from multi-camera streams poses the problem of which objective metrics to consider, and existing work on view selection lacks consensus on this question. The literature describes diverse possible metrics, but strategies such as information-theoretic, instructional-design, or aesthetics-motivated ones each fail to incorporate all approaches. In this work, we postulate a strategy incorporating information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, for example in 3D rendering. We adapted one such measure, viewpoint entropy, to real-world 2D images. Additionally, we incorporated similarity penalization to obtain a more accurate measure of the entropy of a view, which serves as one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of the chosen scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional-design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics; to incorporate domain knowledge, we included the visibility of predetermined regions as a further metric. All of these metrics are combined into a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate the metrics. Furthermore, the responses from the online study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with user choices.
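As a hedged illustration of the two measures named above, viewpoint entropy over a frame's visible region areas and NDCG over a ranked list of view relevances might be computed as follows. The region segmentation and relevance scores are assumed inputs; this is a sketch of the general measures, not the paper's implementation:

```python
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy over the visible-area fractions of labelled
    regions in a 2D frame -- the general idea behind adapting
    3D viewpoint entropy to 2D images. Region areas are assumed
    to come from an upstream segmentation step."""
    total = sum(region_areas)
    probs = [a / total for a in region_areas if a > 0]
    return -sum(p * math.log2(p) for p in probs)

def ndcg(relevances):
    """Normalized discounted cumulative gain for one ranked list of
    per-view relevance scores (higher is better)."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# a view showing regions evenly scores the maximum entropy for 4 regions
print(viewpoint_entropy([1, 1, 1, 1]))   # 2.0
# a ranking that already matches the ideal order scores 1.0
print(ndcg([3, 2, 1]))                   # 1.0
```

A view that balances many visible regions scores high entropy; NDCG then measures how closely the recommender's ranking matches the user-preferred ordering.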
Neutral buoyancy has been used as an analog for microgravity since the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removing both somatosensory and gravity-direction cues while floating in microgravity, or using virtual reality to set the two in conflict, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removing somatosensory cues alone through neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with the differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity with respect to these perceptual effects.
An internal model of self-motion provides a fundamental basis for action in our daily lives, yet little is known about its development. The ability to control self-motion develops in youth and often deteriorates with advanced age. Self-motion generates relative motion between the viewer and the environment. Thus, the smoothness of the visual motion created will vary as control improves. Here, we study the influence of the smoothness of visually simulated self-motion on an observer's ability to judge how far they have travelled over a wide range of ages. Previous studies were typically highly controlled and concentrated on university students. But are such populations representative of the general public? And are there developmental and sex effects? Here, estimates of distance travelled (visual odometry) during visually induced self-motion were obtained from 466 participants drawn from visitors to a public science museum. Participants were presented with visual motion that simulated forward linear self-motion through a field of lollipops using a head-mounted virtual reality display. They judged the distance of their simulated motion by indicating when they had reached the position of a previously presented target. The simulated visual motion was presented with or without horizontal or vertical sinusoidal jitter. Participants' responses indicated that they felt they travelled further in the presence of vertical jitter. The effectiveness of the display increased with age over all jitter conditions. The estimated time for participants to feel that they had started to move also increased slightly with age. There were no differences between the sexes. These results suggest that age should be taken into account when generating motion in a virtual reality environment. Citizen science studies like this can provide a unique and valuable insight into perceptual processes in a truly representative sample of people.
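The display manipulation described above can be sketched as a simple camera model: constant forward translation with optional horizontal or vertical sinusoidal jitter superimposed. The amplitude and frequency below are assumptions for illustration, not the study's actual display parameters:

```python
import math

def camera_position(t, speed=2.0, jitter_amp=0.05, jitter_hz=1.5,
                    axis="vertical"):
    """Position (x, y, z) of a virtually moving observer at time t
    (seconds): constant forward translation along z plus optional
    sinusoidal jitter on the horizontal (x) or vertical (y) axis.

    Speed, amplitude, and frequency are illustrative values only.
    """
    x = y = 0.0
    z = speed * t                                    # forward travel
    offset = jitter_amp * math.sin(2 * math.pi * jitter_hz * t)
    if axis == "vertical":
        y = offset
    elif axis == "horizontal":
        x = offset
    return (x, y, z)

# jitter perturbs the path but does not change net forward distance
print(camera_position(0))        # (0.0, 0.0, 0.0)
print(camera_position(2.0)[2])   # 4.0
```

Because the jitter integrates to zero over each cycle, any effect on judged distance (as reported for vertical jitter) reflects perception rather than actual displacement.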
At universities and universities of applied sciences, mathematics education is one of the bottlenecks for prospective engineers. Many engineering students fail to meet the mathematical requirements in their first semesters. Lecturers, subject-matter and higher-education didactics specialists, and increasingly also disciplinary representatives and professional associations are asking what faculties and departments can do so that students can strengthen their mathematical abilities and master the demanding path to an engineering degree.
The perception of the perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of various gravity-related and body-based cues. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity level of approximately 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence is observable, and thereby to test the above hypothesis. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing under normal gravity in eliciting the sense of the perceptual upright. The data obtained further indicate that a gravitational field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This corresponds approximately to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
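The closing arithmetic can be checked directly: 0.17 g corresponds to roughly 1.67 m/s², which clears both the 0.15 g orientation threshold and the ~0.1 m/s² vestibular threshold for linear acceleration. This is a quick sanity check of the abstract's numbers, not code from the project:

```python
G = 9.81  # m/s^2, standard Earth gravity

moon_g_fraction = 0.17       # lunar gravity expressed in g (from the abstract)
threshold_g = 0.15           # orientation-effectiveness threshold (from the study)
vestibular_threshold = 0.1   # m/s^2, threshold for sensing linear acceleration

moon_accel = moon_g_fraction * G
print(f"Lunar gravity: {moon_accel:.2f} m/s^2")   # Lunar gravity: 1.67 m/s^2
print(moon_accel > vestibular_threshold)          # True
print(moon_g_fraction >= threshold_g)             # True
```

The abstract's rounded figure of 1.6 m/s² is consistent with 0.17 × 9.81 ≈ 1.67 m/s².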
Are There Extended Cognitive Improvements from Different Kinds of Acute Bouts of Physical Activity?
(2020)
Acute bouts of physical activity of at least moderate intensity have been shown to enhance cognition in young as well as older adults. This effect has been observed for different kinds of activities, such as aerobic or strength and coordination training. However, only few studies have directly compared these activities regarding their effectiveness. Furthermore, most previous studies have focused mainly on inhibition and have not examined the other important core executive functions (i.e., updating, switching), which are likewise essential for our behavior in daily life (e.g., staying focused, resisting temptations, thinking before acting). Therefore, this study aimed to directly compare two kinds of activities, aerobic and coordinative, and examine how they might affect executive functions (i.e., inhibition, updating, and switching) in a test-retest protocol. This comparison has practical implications, as coordinative exercises, for example, require little space and would be preferable in settings such as an office or a classroom. Furthermore, we designed our experiment so that learning effects were controlled. We then tested the influence of acute bouts of physical activity on executive functioning in both young and older adults (young: 16–22 years, old: 65–80 years). Overall, we found no differences between aerobic and coordinative activities; in fact, benefits from physical activity occurred only in the updating tasks in young adults. Additionally, we observed some learning effects that might influence the results. It is therefore important both to control cognitive tests for learning effects in test-retest studies and to analyze the effects of physical activity at the construct level of executive functions.
Females are influenced more than males by visual cues during many spatial orientation tasks, but females rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity, and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART), which shows the ambiguous symbol "p" in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity, and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was lower in females than in males (t=-18.48, p≤0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1 g), in this and other studies (e.g., Barnett-Cowan et al., 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in the weight allocated by females to vision in simulated low-gravity conditions, compared to upright under normal gravity, may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case, and at which point the perceptual change happens, requires further research.
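A common way to model the perceptual upright in OCHART-style studies is as a weighted vector sum of the visual, gravity, and body "up" directions. The sketch below uses that general model with the female weightings reported in the abstract; the 8.6% vision weight is inferred as the remainder after gravity (28.4%) and body (63.0%), and the example directions are illustrative:

```python
import math

def perceptual_upright(dirs_deg, weights):
    """Weighted vector sum of the visual, gravity, and body 'up'
    directions (degrees), returning the predicted PU direction.

    A sketch of the general vector-sum model, not the study's
    actual analysis code; directions below are illustrative.
    """
    x = sum(w * math.cos(math.radians(d)) for d, w in zip(dirs_deg, weights))
    y = sum(w * math.sin(math.radians(d)) for d, w in zip(dirs_deg, weights))
    return math.degrees(math.atan2(y, x))

# body and gravity at 90° (upright); scene tilted away to -22°;
# female weightings from the abstract: vision 8.6%, gravity 28.4%, body 63.0%
pu = perceptual_upright([-22.0, 90.0, 90.0], [0.086, 0.284, 0.630])
```

With a low vision weight, the predicted PU stays close to the gravity/body direction, which is the pattern the abstract reports for females under visual-vestibular conflict.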