In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
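As a rough illustration of how such proximity guidance could be driven, the sketch below maps hand-object distance to cue intensity. The names, the sensing range, and the quadratic ramp are hypothetical choices for illustration, not details taken from the paper.

```typescript
// Hypothetical proximity-to-cue mapping; not the paper's implementation.
interface ProximityCue {
  vibration: number; // normalized vibrotactile amplitude, 0..1
  audioGain: number; // normalized audio cue gain, 0..1
}

// maxRange (meters) is an assumed sensing radius around the hand.
function proximityCue(distance: number, maxRange = 0.25): ProximityCue {
  if (distance >= maxRange) return { vibration: 0, audioGain: 0 }; // out of range: no cue
  const t = 1 - distance / maxRange; // 0 at the range boundary, 1 at contact
  return { vibration: t * t, audioGain: t }; // quadratic ramp emphasizes near-contact
}

console.log(proximityCue(0.05)); // strong cue close to the object's surface
```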
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
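A minimal sketch of what a tactile pattern for a grasp action might look like as data, assuming a set of indexed actuators on forearm and fingers; the layout, timings, and API are invented for illustration and are not the paper's design.

```typescript
// Hypothetical tactile pattern encoding; actuator indices and timings invented.
type Frame = { actuators: number[]; durationMs: number };

const graspPattern: Frame[] = [
  { actuators: [0, 1], durationMs: 80 },     // forearm pulse: draw attention
  { actuators: [2, 3, 4], durationMs: 120 }, // fingertip pads: cue hand closure
];

// drive() stands in for whatever hardware interface energizes the actuators.
async function play(pattern: Frame[], drive: (ids: number[]) => void) {
  for (const frame of pattern) {
    drive(frame.actuators);
    await new Promise((resolve) => setTimeout(resolve, frame.durationMs));
    drive([]); // release all actuators between frames
  }
}

play(graspPattern, (ids) => console.log("active actuators:", ids));
```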
Entering the work envelope of an industrial robot can lead to severe injury from collisions with moving parts of the system. Conventional safety mechanisms therefore mostly restrict access to the robot using physical barriers such as walls and fences or non-contact protective devices including light curtains and laser scanners. As none of these mechanisms applies to human-robot collaboration (HRC), a concept in which human and machine complement one another by working hand in hand, there is a rising need for safe and reliable detection of human body parts amidst background clutter. Camera-based systems are typically well suited to this application. Still, safety concerns remain, owing to possible detection failures caused by environmental occlusion, extraneous light, or other adverse imaging conditions. While ultrasonic proximity sensing can provide physical diversity to the system, it does not yet allow relevant objects to be reliably distinguished from background objects. This work investigates a new approach to detecting relevant objects and human body parts based on acoustic holography. The approach is experimentally validated using a low-cost, application-specific ultrasonic sensor system built from micro-electromechanical systems (MEMS). The presented results show that this system far outperforms conventional proximity sensors in terms of lateral imaging resolution and thus allows for more intelligent muting processes without compromising the safety of people working close to the robot. Building on this work, a next step could be the development of a multimodal sensor system to safeguard workers who collaborate with robots using the described ultrasonic sensor system.
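For intuition about how an array of ultrasonic elements can resolve objects laterally, here is a classic delay-and-sum beamformer, a much simpler relative of the acoustic holography used in this work; the linear array geometry and all parameters are illustrative assumptions.

```typescript
// Delay-and-sum beamforming sketch for a linear ultrasonic array.
// Simpler than acoustic holography; shown only to convey coherent summation.
const SPEED_OF_SOUND = 343; // m/s in air

function delayAndSum(
  channels: number[][], // one sampled waveform per array element
  elementX: number[],   // element positions along the array axis (m)
  angleRad: number,     // steering angle of the look direction
  sampleRate: number    // samples per second
): number[] {
  const n = channels[0].length;
  const out: number[] = new Array(n).fill(0);
  for (let c = 0; c < channels.length; c++) {
    // Plane-wave delay of this element relative to the array origin.
    const delaySec = (elementX[c] * Math.sin(angleRad)) / SPEED_OF_SOUND;
    const shift = Math.round(delaySec * sampleRate);
    for (let i = 0; i < n; i++) {
      const j = i - shift; // align this channel to the look direction
      if (j >= 0 && j < n) out[i] += channels[c][j] / channels.length;
    }
  }
  return out; // the sum is largest for echoes arriving from the look direction
}
```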
Almost unnoticed by the e-learning community, the underlying technology of the WWW is currently undergoing massive changes on all levels. In this paper we draw attention to this emerging game changer and discuss the consequences for online learning. In our e-learning project "Work & Study", funded by the German Federal Ministry of Education and Research, we have experimented with several new technological approaches such as Mobile First, Responsive Design, Mobile Apps, Web Components, Client-side Components, Progressive Web Apps, Course Apps, e-books, and WebSockets for real-time collaboration, and we report on the results and their consequences for online learning practice. A modular web is emerging in which e-learning units are composed from and delivered by universally embeddable web components.
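To give a flavor of the "universally embeddable" approach, here is a minimal custom element in the Web Components style; the tag name and attribute are hypothetical and not taken from the "Work & Study" project.

```typescript
// Minimal embeddable course unit as a Web Component (hypothetical tag/attribute).
class CourseUnit extends HTMLElement {
  connectedCallback() {
    // Shadow DOM isolates the unit's markup and styles from the host page.
    const root = this.attachShadow({ mode: "open" });
    const title = this.getAttribute("title") ?? "Untitled unit";
    root.innerHTML = `
      <style>:host { display: block; border: 1px solid #ccc; padding: 1em; }</style>
      <h3>${title}</h3>
      <slot></slot>`; // the host page supplies the unit's content
  }
}
customElements.define("course-unit", CourseUnit);

// Usage in any page, LMS or not: <course-unit title="Unit 1">...</course-unit>
```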
In this paper we propose an architecture that integrates classical planning with real autonomous mobile robots. We start with a high-level description of all components necessary to set goals, generate plans, execute them on real robots, and monitor the outcomes of their actions. At the core of our method, and to deal with execution issues, we encode the agents' actions as automata. We demonstrate the flexibility of the system by testing it on two different domains in the context of the international RoboCup competition: industrial (Basic Transportation Test) and domestic (General Purpose Service Robot). Additionally, we benchmark the scalability of the planning system in both domains on a set of planning problems of increasing complexity. The proposed framework is open source and can be easily extended.
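To illustrate the idea of encoding an action as an automaton whose terminal state an executive can monitor, here is a minimal sketch; the states and events are invented for illustration and are not the paper's actual encoding.

```typescript
// Hypothetical action automaton; states and events invented for illustration.
type State = "idle" | "running" | "succeeded" | "failed";
type Event = "start" | "arrived" | "timeout" | "obstacle";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle:      { start: "running" },
  running:   { arrived: "succeeded", timeout: "failed", obstacle: "failed" },
  succeeded: {}, // terminal
  failed:    {}, // terminal: signals the planner to replan
};

class ActionAutomaton {
  state: State = "idle";
  step(event: Event): State {
    this.state = transitions[this.state][event] ?? this.state; // ignore invalid events
    return this.state;
  }
}

const moveTo = new ActionAutomaton();
moveTo.step("start");
console.log(moveTo.step("obstacle")); // "failed" -> execution monitor triggers replanning
```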
In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, an approach known as foveated rendering. As increasingly computationally involved shading is requested and display resolutions increase, maintaining low latencies is challenging when rendering in a virtual reality context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, besides its reduced visual acuity, the eye is also an optical system, filtering radiance through lenses. The lenses create depth-of-field (DoF) effects when accommodated to objects at varying distances. The central idea of this article is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while the number of shaded samples is reduced by more than 69%.
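The core mechanism, picking a coarser shading rate as gaze eccentricity grows, can be sketched as follows; the thresholds and rates are illustrative assumptions, not the article's measured parameters.

```typescript
// Illustrative eccentricity-based shading-rate selection; thresholds assumed.
function shadingRate(
  px: number, py: number,  // pixel position
  gx: number, gy: number,  // gaze position in the same pixel coordinates
  pixelsPerDegree: number  // display/eye-tracker calibration factor
): number {
  const eccentricityDeg = Math.hypot(px - gx, py - gy) / pixelsPerDegree;
  if (eccentricityDeg < 5) return 1;  // fovea: shade every sample
  if (eccentricityDeg < 15) return 2; // near periphery: every 2nd sample
  return 4;                           // far periphery: every 4th sample
}
// Skipped samples are reconstructed, and remaining artifacts can be hidden
// under the gaze-contingent depth-of-field filter described above.
```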
General Chair Message (2018)
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation (2018)
In this course, we will take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces has become prevalent, driven forward by, among other factors, improved technologies, increasing application complexity, and user experience requirements. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely at both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Building on a general understanding of 3DUIs, we discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and we highlight guidelines for their use.
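As a concrete taste of the selection topic, here is the textbook ray-casting technique evaluated against a scene of bounding spheres; the scene representation is a simplification chosen for brevity, not an interface from the course.

```typescript
// Ray-casting selection sketch; bounding-sphere scene is a simplification.
type Vec3 = { x: number; y: number; z: number };
interface SceneObject { id: string; center: Vec3; radius: number }

// origin/dir describe the user's pointing ray; dir must be normalized.
function pickNearest(origin: Vec3, dir: Vec3, scene: SceneObject[]): string | null {
  let best: { id: string; t: number } | null = null;
  for (const obj of scene) {
    // Ray-sphere intersection: solve |origin + t*dir - center|^2 = radius^2.
    const oc = {
      x: origin.x - obj.center.x,
      y: origin.y - obj.center.y,
      z: origin.z - obj.center.z,
    };
    const b = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;
    const c = oc.x ** 2 + oc.y ** 2 + oc.z ** 2 - obj.radius ** 2;
    const disc = b * b - c;
    if (disc < 0) continue;         // ray misses this sphere
    const t = -b - Math.sqrt(disc); // nearest hit along the ray
    if (t > 0 && (!best || t < best.t)) best = { id: obj.id, t };
  }
  return best ? best.id : null; // the object the ray selects, if any
}
```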