Prof. Dr. André Hinkenjann
This paper presents Volt, an interactive volume renderer for the NVIDIA CUDA architecture. Acceleration is achieved by exploiting the technical characteristics of the CUDA device, by partitioning the algorithm, and by executing the CUDA kernels asynchronously. Parallelism is exploited on the host, on the device, and between host and device. We show how the computations are carried out efficiently through targeted use of these resources. The results are copied back, so the kernel does not have to run on the device used for display. No synchronization of the CUDA threads is required.
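The abstract above describes partitioning the rendering work and overlapping kernel execution with result copy-back. As a rough host-side model of that overlap idea (all names are illustrative; this is not Volt's actual CUDA code), one can pipeline "render" and "copy back" so that the copy of one partition runs while the next partition renders:

```python
# Illustrative sketch only: a two-stage pipeline in which the copy-back of
# partition i-1 overlaps with the rendering of partition i, mimicking
# asynchronous CUDA kernel launches and device-to-host copies.
from concurrent.futures import ThreadPoolExecutor

def render_slice(slice_id):
    # stand-in for an asynchronous CUDA kernel launch on one partition
    return [slice_id] * 4  # fake pixel data for this partition

def copy_back(data):
    # stand-in for an asynchronous device-to-host copy
    return list(data)

def render_volume(num_slices):
    frame = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = None  # copy of the previous slice runs while the next renders
        for s in range(num_slices):
            render_future = pool.submit(render_slice, s)
            if pending is not None:
                frame.extend(pending.result())
            pending = pool.submit(copy_back, render_future.result())
        frame.extend(pending.result())
    return frame
```

In real CUDA code the same structure would use streams and `cudaMemcpyAsync`, so no explicit thread synchronization is needed between partitions.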
Large, high-resolution displays are well suited for creating digital environments for co-located collaborative task solving. Yet placing multiple users in a shared environment may increase the risk of interference, causing mental discomfort and decreasing team efficiency. Coordination strategies and techniques have been introduced to mitigate such interference. However, in mixed-focus collaboration scenarios users repeatedly switch between loose and tight collaboration, so different coordination techniques may be required depending on the team members' current collaboration state. To support this, systems must be able to recognize collaboration states, as well as the transitions between them, in order to adjust the coordination strategy appropriately. Previous studies of group behavior during collaboration in front of large displays investigated only collaborative coupling states, not the transitions between them. To address this gap, we conducted a study with 12 participant dyads in front of a tiled display, in which they solved two tasks under two conditions (focus and overview). We examined group dynamics and categorized transitions by changes in proximity, verbal communication, visual attention, visual interface, and gestures. The findings can inform user interface design and the development of group behavior models.
This paper describes work done at our lab to improve the visual and other qualities of Virtual Environments. To achieve better quality, we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework. Although its renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho with a small kernel and several plug-ins.
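The small-kernel-plus-plug-ins design described above can be illustrated generically (the registry names below are invented for illustration and are not basho's actual API): the kernel holds no rendering logic itself and delegates everything to interchangeable renderer plug-ins, which is what makes the framework renderer-independent.

```python
# Illustrative sketch of a small kernel with renderer plug-ins; names are
# hypothetical, not taken from basho.
class Kernel:
    def __init__(self):
        self.renderers = {}

    def register(self, name, renderer):
        # a plug-in announces itself to the small kernel at load time
        self.renderers[name] = renderer

    def render(self, name, scene):
        # the kernel delegates all rendering work to the chosen plug-in
        return self.renderers[name](scene)

kernel = Kernel()
# renderers need not be graphics renderers, as the abstract notes
kernel.register("ascii", lambda scene: f"[ascii] {scene}")
kernel.register("audio", lambda scene: f"[audio] {scene}")
```

Swapping renderers is then a matter of registering a different plug-in, leaving the kernel untouched.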
Supported by their large size and high resolution, display walls are well suited for different types of collaboration. However, to foster rather than impede collaboration, interaction techniques need to be carefully designed, taking into account the possibilities and limitations of the display size and their effects on human perception and performance. In this paper we investigate the impact of visual distractors in peripheral vision (which, for instance, might be caused by other collaborators' input) on short-term memory and attention. Such distractors occur frequently when multiple users collaborate on large wall display systems and may draw attention away from the main task, potentially affecting performance and cognitive load. Yet the effect of these distractors is hardly understood; a better understanding may provide valuable input for designing more effective user interfaces. In this article, we report on two interrelated studies that investigated the effect of distractors. Depending on when the distractor is inserted in the task performance sequence, as well as on its location, user performance can be disturbed: we show that distractors may not affect short-term memory, but do have an effect on attention. We look closely into these effects and identify future directions for designing more effective interfaces.
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
This work presents a method for generating and rendering natural-looking vegetation over very large areas while taking ecological factors into account. Because of the complexity of biological systems and the level of detail of plant models, the generation and visualization of vegetation is a challenging area of computer graphics, and it can considerably increase the realism of landscape visualizations. Building on [DMS06], Silva generates vegetation such that the Wang tiles required for rendering, and the partial distributions associated with them, can be reused. To this end, we present a technique for generating Poisson disk distributions with variable radii on seamless Wang tile sets without computationally expensive global optimization. By incorporating neighborhoods and freely configurable generation pipelines, arbitrary abiotic and biotic factors can be taken into account during vegetation generation. The plant distributions that Silva creates on Wang tiles allow the acceleration structures built on top of them to be reused during visualization. Through multi-level instancing and nested kd-trees, vegetated areas hundreds of square kilometers in size can be visualized with low render times and a small memory footprint.
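The core primitive mentioned above, a Poisson disk distribution with variable radii, can be sketched with simple brute-force dart throwing for a single tile (the paper's seamless tile-border handling and ecological factors are omitted here; the function names are illustrative, not Silva's API):

```python
# Illustrative sketch: dart-throwing Poisson disk sampling where the disk
# radius may vary over the tile, e.g. to model plant size or density.
import random

def poisson_disk(width, height, radius_fn, attempts=2000, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the tile reusable
    points = []  # accepted samples as (x, y, r) triples
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = radius_fn(x, y)  # radius may depend on position (variable radii)
        # accept only if the new disk overlaps no previously accepted disk
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
               for px, py, pr in points):
            points.append((x, y, r))
    return points
```

A production sampler would use a spatial grid instead of the quadratic overlap test, and would constrain samples near tile borders so adjacent Wang tiles remain seamless.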
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms
(2013)
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination it is possible to simulate interactively effects from geometric optics, like correct reflections and refractions.
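The geometric-optics effects mentioned above, correct reflections and refractions, come down to two standard per-hit vector formulas that any ray tracer evaluates; the sketch below shows those formulas in isolation (this is textbook math, not code from the described framework):

```python
# Illustrative sketch of the per-hit reflection and refraction directions.
import math

def reflect(d, n):
    # mirror the incoming unit direction d about the unit normal n:
    # r = d - 2 (d . n) n
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dn * b for a, b in zip(d, n))

def refract(d, n, eta):
    # Snell's law for unit vectors; eta = n1 / n2.
    # Returns None on total internal reflection.
    cos_i = -sum(a * b for a, b in zip(d, n))
    k = 1 - eta * eta * (1 - cos_i * cos_i)
    if k < 0:
        return None  # total internal reflection: no transmitted ray
    return tuple(eta * a + (eta * cos_i - math.sqrt(k)) * b
                 for a, b in zip(d, n))
```

With eta = 1 the refracted ray continues straight through, and for steep angles at a dense-to-thin boundary `refract` correctly reports total internal reflection.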
This paper describes FPGA-based image combining for parallel graphics systems. The goal of our current work is to reduce network traffic and latency in order to increase the performance of parallel visualization systems. Initial data distribution is based on a common Ethernet network, whereas image combining and return differ from traditional parallel rendering methods: the computed sub-images are grabbed directly from the DVI ports for fast image compositing by an FPGA-based combiner.
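The compositing step performed by such a combiner can be sketched in software for the simple tiled case, where each render node produces one rectangular sub-image and the combiner stitches them into the final frame (a minimal model of the data flow, not the FPGA implementation):

```python
# Illustrative sketch: stitch row-major sub-images (tiles) into one frame.
# Each sub-image is a list of pixel rows; the combiner concatenates the
# matching rows of horizontally adjacent tiles.
def composite(sub_images, tiles_x, tile_h):
    frame = []
    tiles_y = len(sub_images) // tiles_x
    for ty in range(tiles_y):
        for row in range(tile_h):
            out_row = []
            for tx in range(tiles_x):
                out_row.extend(sub_images[ty * tiles_x + tx][row])
            frame.append(out_row)
    return frame
```

Doing this in dedicated hardware on the DVI signal path is what lets the combiner bypass the network for the bandwidth-heavy image data.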
Phase Space Rendering
(2007)
When navigating larger virtual environments and computer games, natural walking is often infeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces ("human joystick") can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating the bobbing head motions of walking), and vibrotactile cues (via vibrotactile transducers and bass shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure, however, showed no benefits and reduced usability. Together, these results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and tele-presence.
Human beings spend much time under the influence of artificial lighting. It is often beneficial to adapt lighting to the task, as well as to the user's mental and physical constitution and well-being. This creates new requirements for lighting - human-centric lighting - and drives a need for new light control methods in interior spaces. In this paper we present a holistic system that takes a novel approach to human-centric lighting by introducing simulation methods into interactive light control, adapting the lighting to the user's needs. We describe a simulation and evaluation platform that uses interactive stochastic spectral rendering to simulate light sources, allowing their interactive adjustment and adaptation.
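Spectral rendering, as used by the platform above, carries radiance per wavelength rather than as RGB. The final reduction from a spectral power distribution to a photometric quantity can be sketched as a simple numerical integration against the luminous-efficiency curve (the Gaussian stand-in for V(λ) and all constants below are illustrative approximations, not the system's actual tables):

```python
# Illustrative sketch: integrate a spectral power distribution against a
# Gaussian approximation of the photopic luminous-efficiency curve V(lambda).
import math

def luminance(spd, lo=380.0, hi=780.0, steps=100):
    # spd: callable mapping wavelength in nm -> spectral radiance
    dl = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        lam = lo + (i + 0.5) * dl  # midpoint rule over the visible range
        v = math.exp(-((lam - 555.0) ** 2) / (2 * 50.0 ** 2))  # ~V(lambda)
        total += spd(lam) * v * dl
    return 683.0 * total  # 683 lm/W scales the photopic peak
```

A real pipeline would use tabulated CIE curves and also compute chromaticity, which is what makes spectral methods suitable for evaluating human-centric lighting.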
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil and gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations, owing to the limited scalability of their graphics performance. Recently, there has been a trend toward using networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory: all data and commands have to be sent across the network, and removing this network bottleneck turns out to be a challenging problem. In this article we present several approaches to network-aware parallel rendering on commodity hardware, comprising technological as well as algorithmic solutions.
Clusters of commodity PCs are widely considered as the way to go to improve rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for real-time rendering applications. Major design objectives for our system are: usage of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with a particularly high quality.
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To visualize such data interactively with direct volume rendering, sophisticated techniques for problem domain decomposition, memory management, and rendering have to be used. The volume renderer Volt is used to show how CUDA can be utilised efficiently to manage the volume data and the GPU's memory, with the aim of rendering low-opacity views of large volumes at interactive frame rates.
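The memory-management problem described above, keeping a working set of volume bricks resident on a GPU that cannot hold the whole data set, is commonly handled with an LRU brick cache. The sketch below models that idea in plain Python (the class and its names are illustrative; the abstract does not specify Volt's actual paging scheme):

```python
# Illustrative sketch: an LRU cache of volume bricks standing in for the
# limited GPU memory; load_fn stands in for a host-to-device upload.
from collections import OrderedDict

class BrickCache:
    def __init__(self, capacity):
        self.capacity = capacity       # max bricks resident "on the GPU"
        self.resident = OrderedDict()  # brick_id -> data, in LRU order
        self.uploads = 0               # host-to-device transfers performed

    def fetch(self, brick_id, load_fn):
        if brick_id in self.resident:           # cache hit: mark most recent
            self.resident.move_to_end(brick_id)
            return self.resident[brick_id]
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)   # evict least recently used
        self.uploads += 1
        data = load_fn(brick_id)                # stand-in for cudaMemcpy
        self.resident[brick_id] = data
        return data
```

Because ray traversal for low-opacity renderings revisits nearby bricks across frames, such a cache keeps most fetches as hits and limits costly PCIe transfers.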