H-BRS Bibliography
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (52)
Document Type
- Conference Object (30)
- Article (20)
- Book (monograph, edited volume) (1)
- Contribution to a Periodical (1)
Keywords
- 3D user interface (7)
- virtual reality (6)
- haptics (4)
- 3D user interfaces (2)
- Augmented Reality (2)
- Augmented reality (2)
- Perception (2)
- Virtual reality (2)
- guidance (2)
- human factors (2)
- interface design (2)
- leaning (2)
- multisensory cues (2)
- navigation (2)
- peripheral vision (2)
- spatial updating (2)
- vibration (2)
- 3D User Interface (1)
- 3D interfaces (1)
- 3D navigation (1)
- Active locomotion (1)
- Adaptive Control (1)
- Camera selection (1)
- Camera view analysis (1)
- Challenges (1)
- Co-located work (1)
- Cognitive informatics (1)
- Cybersickness (1)
- Demonstration-based training (1)
- Entropy (1)
- Feedback (1)
- Fixed spatial data (1)
- Focus plus context (1)
- Games (1)
- Group behavior analysis (1)
- Hand Guidance (1)
- Head Mounted Display (1)
- Human computer interaction (1)
- Human factors (1)
- Immersion (1)
- Immersive analytics (1)
- Information interaction (1)
- Instruction design (1)
- Large display interaction (1)
- Large, high-resolution displays (1)
- Lighting simulation (1)
- Locomotion (1)
- Methodologies (1)
- Motion Sickness (1)
- Multi-camera (1)
- Multilayer interaction (1)
- Multisensory cues (1)
- Navigation (1)
- Navigation interface (1)
- Performance (1)
- Presence (1)
- Proximity (1)
- Recommender systems (1)
- Tactile Feedback (1)
- Tactile feedback (1)
- Tiled displays (1)
- Touchscreen interaction (1)
- Travel Techniques (1)
- User Study (1)
- User engagement (1)
- User interface (1)
- User interfaces (1)
- VR (1)
- View selection (1)
- Virtual Reality (1)
- Visualization (1)
- adaptive trigger (1)
- audio-tactile feedback (1)
- augmented, and virtual realities (1)
- back-of-device interaction (1)
- bass-shaker (1)
- body-centric cues (1)
- collaboration (1)
- collision (1)
- controller design (1)
- cybersickness (1)
- depth perception (1)
- embodied interfaces (1)
- flying (1)
- full-body interface (1)
- gaming (1)
- hand guidance (1)
- haptic feedback (1)
- haptic interfaces (1)
- human computer interaction (1)
- human-centric lighting (1)
- immersion (1)
- immersive systems (1)
- information display methods (1)
- interaction techniques (1)
- leaning, self-motion perception (1)
- leaning-based interfaces (1)
- locomotion (1)
- locomotion interface (1)
- mobile applications (1)
- mobile projection (1)
- motion cueing (1)
- motion sickness (1)
- multi-layer display (1)
- multisensory interface (1)
- natural user interface (1)
- navigational search (1)
- pen interaction (1)
- peripheral visual field (1)
- projection based systems (1)
- proxemics (1)
- psychophysics (1)
- robotics (1)
- see-through head-mounted displays (1)
- semi-continuous locomotion (1)
- sensemaking (1)
- short-term memory (1)
- situation awareness (1)
- spatial augmented reality (1)
- spatial orientation (1)
- spectral rendering (1)
- stereoscopic vision (1)
- surface textures (1)
- teleportation (1)
- telepresence (1)
- travel techniques (1)
- user engagement (1)
- user study (1)
- vection (1)
- view management (1)
- virtual environments (1)
- virtual locomotion (1)
- visuohaptic feedback (1)
- weight perception (1)
- whole-body interface (1)
- zooming interface (1)
- zooming interfaces (1)
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting, in terms of visualization and interaction, for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. To this end, we designed two tasks: one that resembles the information foraging loop and one that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics, and they lead to distinct space and display usage patterns. The information foraging activity requires users to work with individual data elements to look into details; here, users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires users to work with the entire information space and therefore to overview the entire display.
We observed 12 groups for an average of two hours each and gathered both qualitative and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task and work on the parts in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also identified and describe multiple user roles, as well as a new coupling style that fits neither of the established categories, loosely or tightly coupled. Moreover, we observed a territory type that has not been reported in prior research; in our view, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for prolonged work.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is challenging. Selecting the best view from multi-camera streams raises the question of which objective metrics to consider, and existing work on view selection lacks consensus on this point. The literature describes diverse candidate metrics, but strategies built on a single perspective, such as information-theoretic, instructional-design, or aesthetics-motivated approaches, fail to incorporate them all. In this work, we postulate a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to assess the goodness of a view, for example in 3D rendering. We adapted one such measure, viewpoint entropy, to real-world 2D images. Additionally, we incorporated a similarity penalization to obtain a more accurate measure of the entropy of a view, which serves as one of the metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of these scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional design considerations, we included the visibility of the trainer's body pose, face, face while instructing, and hands as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics feed into a parameterized view recommendation approach for demonstration-based training. We validated the metrics in an online study using recorded multi-camera video streams from a simulation environment.
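The entropy term of such a scoring scheme can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the region-area representation of a view, the histogram-overlap similarity, and the weighted combination of metric terms are all assumptions made for the example.

```python
# Sketch: viewpoint entropy of a 2D view, a similarity penalty against
# other candidate views, and a parameterized combination with further
# visibility metrics. All modeling choices here are illustrative.
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy of the visible-region area distribution of one view.

    region_areas: pixel areas of distinct visible scene regions in a single
    2D camera image. A view showing many regions at balanced sizes scores
    higher than one dominated by a single region.
    """
    total = sum(region_areas)
    if total == 0:
        return 0.0
    entropy = 0.0
    for a in region_areas:
        if a > 0:
            p = a / total
            entropy -= p * math.log2(p)
    return entropy

def penalized_entropy(view, other_views, alpha=0.5):
    """Entropy of `view`, reduced by its similarity to other candidate views.

    Similarity is approximated as the overlap of the normalized region-area
    distributions (a stand-in for whatever similarity measure is used).
    """
    def normalize(areas):
        t = sum(areas) or 1.0
        return [a / t for a in areas]

    base = viewpoint_entropy(view)
    p = normalize(view)
    sims = []
    for other in other_views:
        q = normalize(other)
        n = max(len(p), len(q))               # pad to equal length
        pp = p + [0.0] * (n - len(p))
        qq = q + [0.0] * (n - len(q))
        sims.append(sum(min(a, b) for a, b in zip(pp, qq)))
    max_sim = max(sims) if sims else 0.0
    return base * (1.0 - alpha * max_sim)

def view_score(entropy_score, visibility_scores, weights):
    """Parameterized recommendation score: a weighted sum of the entropy
    term and visibility terms (e.g. body pose, face, hands, predetermined
    regions), with the weights as tunable parameters."""
    terms = [entropy_score] + list(visibility_scores)
    return sum(w * t for w, t in zip(weights, terms))
```

The recommended view would then be the arg-max of `view_score` over all candidate streams, with the weights tuned per domain.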
Furthermore, the responses from the online study were used to optimize the view recommendation, which achieved a normalized discounted cumulative gain (NDCG) of 0.912, indicating that its rankings closely match user choices.
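NDCG itself is a standard ranking metric; a minimal implementation shows how a value such as 0.912 is computed (the relevance lists in the test are made-up numbers, not data from the study):

```python
# Sketch: discounted cumulative gain (DCG) and its normalized form (NDCG)
# for a ranked list of relevance scores.
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance score is discounted by
    the log of its rank position (ranks are 1-based, hence i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG of the system's ranking divided by the DCG of the ideal
    (descending-relevance) ordering; 1.0 means a perfect ranking."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

Applied to view recommendation, the relevance scores would come from how users rated each camera view in the online study, and the system's ranking from the recommender's scores.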