Prof. Dr. André Hinkenjann
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared-memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as in industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and interoperable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (in C++ and Java) have been adapted to function within our framework.
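To make the decoupling idea concrete, here is a minimal single-process sketch of the reprojection step at the heart of a Render-Cache-style display loop: cached, already-shaded 3D samples are splatted into the current view at frame rate, independently of how fast the renderer produces new samples. All types and names are our illustration, not code from the paper.

```cpp
// Reproject cached shaded samples into the current view at display rate.
#include <cstdio>
#include <vector>
#include <limits>

struct Vec3 { float x, y, z; };

// A shaded sample produced by the (slow) renderer component.
struct CachedSample {
    Vec3 position;  // world-space hit point
    Vec3 color;     // shaded radiance
};

// Simple pinhole camera looking down +z from the origin.
struct Camera {
    int   width = 8, height = 8;
    float focal = 4.0f;
    bool project(const Vec3& p, int& px, int& py, float& depth) const {
        if (p.z <= 0.0f) return false;          // behind the camera
        px = int(width  / 2 + focal * p.x / p.z);
        py = int(height / 2 + focal * p.y / p.z);
        depth = p.z;
        return px >= 0 && px < width && py >= 0 && py < height;
    }
};

// Reproject all cached samples into the current view, resolving occlusion
// with a z-buffer. This runs every frame, regardless of sampling speed.
std::vector<Vec3> reproject(const std::vector<CachedSample>& cache,
                            const Camera& cam) {
    std::vector<Vec3>  frame(cam.width * cam.height, Vec3{0, 0, 0});
    std::vector<float> zbuf(cam.width * cam.height,
                            std::numeric_limits<float>::infinity());
    for (const CachedSample& s : cache) {
        int px, py; float depth;
        if (!cam.project(s.position, px, py, depth)) continue;
        int i = py * cam.width + px;
        if (depth < zbuf[i]) { zbuf[i] = depth; frame[i] = s.color; }
    }
    // In the real system, holes (pixels no sample mapped to) are filled by
    // interpolation and queued as requests to the view-driven sampler.
    return frame;
}

int main() {
    std::vector<CachedSample> cache = {
        {{0.0f, 0.0f, 2.0f}, {1, 0, 0}},
        {{0.5f, 0.5f, 4.0f}, {0, 1, 0}},   // occluded at the same pixel
    };
    Camera cam;
    std::vector<Vec3> frame = reproject(cache, cam);
    std::printf("center pixel r=%.1f\n",
                frame[cam.height / 2 * cam.width + cam.width / 2].x);
}
```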
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. Using this combination, it is possible to interactively simulate effects from geometric optics, such as correct reflections and refractions.
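The abstract does not spell out the plugin mechanism, so the following is only a hedged sketch of how runtime-loadable renderer plugins are commonly built on POSIX systems with dlopen; the interface names (Renderer, createRenderer) are hypothetical, not the framework's actual API.

```cpp
// Generic runtime plugin loading via POSIX dlopen (link with -ldl).
#include <dlfcn.h>
#include <cstdio>

// Interface every renderer plugin is assumed to implement.
struct Renderer {
    virtual ~Renderer() = default;
    virtual void renderFrame() = 0;
};

// Each plugin shared object is assumed to export a factory like this.
using CreateRendererFn = Renderer* (*)();

Renderer* loadRenderer(const char* path) {
    void* handle = dlopen(path, RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }
    auto create = reinterpret_cast<CreateRendererFn>(
        dlsym(handle, "createRenderer"));
    if (!create) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }
    return create();  // plugin-allocated renderer (ray tracer, sound, ...)
}

int main() {
    // Hypothetical plugin name; a cluster ray-tracing backend would be
    // just another shared object exporting createRenderer().
    if (Renderer* r = loadRenderer("./libraytracer_plugin.so")) {
        r->renderFrame();
        delete r;
    }
}
```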
Ray tracing, accurate physical simulations with collision detection, particle systems, and spatial audio rendering are only a few of the components that are becoming increasingly attractive for Virtual Environments thanks to steadily increasing computing power. Many of these components use geometric queries for their calculations, and spatial data structures are used to speed up those queries. These data structures are mostly implemented individually for each problem, resulting in many separately maintained parts, unnecessary memory consumption, and computing power wasted on maintaining all the individual structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
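As an illustration of the "one spatial structure, many clients" idea, the sketch below keeps a single store of scene bounds and answers both ray queries (for a ray tracer) and radius queries (for collision detection or audio attenuation) from it. The linear scans stand in for a real accelerator such as a BVH or grid; the interface is ours, not the paper's design.

```cpp
// One centralized spatial index serving several subsystems.
#include <cstdio>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 lo, hi; };

// One axis of the slab test; narrows the [tmin, tmax] interval.
static bool slab(float lo, float hi, float o, float d,
                 float& tmin, float& tmax) {
    float inv = 1.0f / d;                       // +/-inf when d == 0 is fine
    float t0 = (lo - o) * inv, t1 = (hi - o) * inv;
    if (t0 > t1) std::swap(t0, t1);
    tmin = std::max(tmin, t0);
    tmax = std::min(tmax, t1);
    return tmin <= tmax;
}

static bool intersects(const AABB& b, Vec3 o, Vec3 d) {
    float tmin = 0.0f, tmax = 1e30f;
    return slab(b.lo.x, b.hi.x, o.x, d.x, tmin, tmax)
        && slab(b.lo.y, b.hi.y, o.y, d.y, tmin, tmax)
        && slab(b.lo.z, b.hi.z, o.z, d.z, tmin, tmax);
}

// Centralized index: every subsystem registers its geometry here once.
class SpatialIndex {
public:
    int add(const AABB& box) { boxes_.push_back(box); return (int)boxes_.size() - 1; }

    // Used by the ray tracer.
    std::vector<int> queryRay(Vec3 origin, Vec3 dir) const {
        std::vector<int> hits;
        for (int i = 0; i < (int)boxes_.size(); ++i)
            if (intersects(boxes_[i], origin, dir)) hits.push_back(i);
        return hits;
    }

    // Used by collision detection and spatial audio attenuation.
    std::vector<int> queryRadius(Vec3 p, float r) const {
        std::vector<int> hits;
        for (int i = 0; i < (int)boxes_.size(); ++i) {
            const AABB& b = boxes_[i];
            float cx = std::clamp(p.x, b.lo.x, b.hi.x) - p.x;
            float cy = std::clamp(p.y, b.lo.y, b.hi.y) - p.y;
            float cz = std::clamp(p.z, b.lo.z, b.hi.z) - p.z;
            if (cx*cx + cy*cy + cz*cz <= r*r) hits.push_back(i);
        }
        return hits;
    }
private:
    std::vector<AABB> boxes_;   // one copy of the scene bounds for everyone
};

int main() {
    SpatialIndex index;
    index.add({{0, 0, 0}, {1, 1, 1}});
    std::printf("ray hits: %zu, near hits: %zu\n",
        index.queryRay({-1.0f, 0.5f, 0.5f}, {1, 0, 0}).size(),
        index.queryRadius({2.0f, 0.5f, 0.5f}, 1.5f).size());
}
```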
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil & gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations, due to the limited scalability of their graphics performance. Recently, there has been a trend toward using networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory: all data and commands have to be sent across the network. It turns out that removing the network bottleneck is a challenging problem in this context. In this article we present some approaches for network-aware parallel rendering on commodity hardware. These strategies are technological as well as algorithmic solutions.
Clusters of commodity PCs are widely considered the way to go for improving rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for real-time rendering applications. The major design objectives for our system are: use of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with particularly high quality.
This presentation gives an overview of current research in the area of high-quality rendering and visualization at the Institute of Visual Computing (IVC). Our research facility has some unique software and hardware installations, of which we will describe one in this presentation: a large, ultra-high-resolution (72-megapixel) video wall.
Most VE frameworks try to support many different input and output devices. They do not concentrate so much on the rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers, including sound renderers, physics renderers, and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and Mac OS X.
Phase Space Rendering
(2007)
Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms
(2013)
A recent trend in interactive environments is large, ultra-high-resolution displays (LUHRDs). Compared to other large interactive installations, like the CAVE™, LUHRDs are usually flat or (slightly) curved and have a significantly higher resolution, offering new research and application opportunities.
This tutorial provides information for researchers and engineers who plan to install and use a large ultra-high resolution display. We will give detailed information on the hardware and software of recently created and established installations and will show the variety of possible approaches. Also, we will talk about rendering software, rendering techniques and interaction for LUHRDs, as well as applications.
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem domain decomposition, memory management, and rendering have to be used. The volume renderer Volt is used to show how CUDA can be utilized efficiently to manage the volume data and a GPU's memory, with the aim of rendering low-opacity views of large volumes at interactive frame rates.
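Volt's memory manager itself is not public; the sketch below only illustrates the general out-of-core pattern such renderers rely on: the volume is split into bricks, and an LRU cache bounded by the GPU memory budget decides which bricks stay resident. All names and the eviction policy are our assumptions.

```cpp
// LRU brick cache for out-of-core volume rendering (host-side logic only).
#include <cstdio>
#include <list>
#include <unordered_map>

using BrickId = int;

class BrickCache {
public:
    explicit BrickCache(size_t capacity) : capacity_(capacity) {}

    // Called per frame for every brick the current view needs.
    // Returns true if the brick had to be (re)uploaded to the GPU.
    bool request(BrickId id) {
        auto it = index_.find(id);
        if (it != index_.end()) {                 // hit: mark most recent
            lru_.splice(lru_.begin(), lru_, it->second);
            return false;
        }
        if (lru_.size() == capacity_) {           // full: evict oldest brick
            index_.erase(lru_.back());
            lru_.pop_back();                      // would free its GPU slot
        }
        lru_.push_front(id);                      // would copy brick to GPU
        index_[id] = lru_.begin();
        return true;
    }
private:
    size_t capacity_;                             // bricks fitting in VRAM
    std::list<BrickId> lru_;                      // front = most recent
    std::unordered_map<BrickId, std::list<BrickId>::iterator> index_;
};

int main() {
    BrickCache cache(2);
    int uploads = 0;
    int frameNeeds[] = {0, 1, 0, 2, 1};           // brick 1 gets evicted by 2
    for (BrickId id : frameNeeds) uploads += cache.request(id);
    std::printf("uploads: %d\n", uploads);        // prints 4
}
```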
This contribution presents the interactive volume renderer Volt for the NVIDIA CUDA architecture. The speedup is achieved by exploiting the technical properties of the CUDA device, by partitioning the algorithm, and by executing the CUDA kernel asynchronously. Parallelism is exploited on the host, on the device, and between host and device. We show how the computations are carried out efficiently through targeted use of resources. The results are copied back to the host, so that the kernel does not have to run on the device used for display. No synchronization of the CUDA threads is necessary.
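The following is a hedged sketch (ours, not Volt's code) of the asynchronous execution pattern the abstract describes: the work is partitioned into slabs, each slab's kernel launch and device-to-host copy run asynchronously in their own CUDA stream, and the host synchronizes only once at the end; the copied-back results can then be displayed by a different device. Compile with nvcc.

```cpp
// Partitioned, asynchronous kernel execution with CUDA streams.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shadeSlab(float* out, int n, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = value;   // stand-in for the real ray-casting work
}

int main() {
    const int slabs = 4, n = 1 << 20;
    float *d_buf, *h_buf;
    cudaMalloc(&d_buf, slabs * n * sizeof(float));
    cudaMallocHost(&h_buf, slabs * n * sizeof(float)); // pinned: async copy

    cudaStream_t streams[slabs];
    for (int s = 0; s < slabs; ++s) cudaStreamCreate(&streams[s]);

    for (int s = 0; s < slabs; ++s) {
        // Launch and copy-back are both asynchronous; slabs overlap.
        shadeSlab<<<(n + 255) / 256, 256, 0, streams[s]>>>(
            d_buf + s * n, n, float(s));
        cudaMemcpyAsync(h_buf + s * n, d_buf + s * n, n * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();     // host waits once, when all slabs are done

    // Results now live in host memory and can be displayed by any device.
    std::printf("slab 2 first voxel: %.1f\n", h_buf[2 * n]);

    for (int s = 0; s < slabs; ++s) cudaStreamDestroy(streams[s]);
    cudaFreeHost(h_buf); cudaFree(d_buf);
}
```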
Human beings spend much time under the influence of artificial lighting. Often, it is beneficial to adapt lighting to the task, as well as to the user's mental and physical constitution and well-being. This poses new requirements for lighting - human-centric lighting - and drives a need for new light control methods in interior spaces. In this paper we present a holistic system that provides a novel approach to human-centric lighting by introducing simulation methods into interactive light control, to adapt the lighting to the user's needs. We describe a simulation and evaluation platform that uses interactive stochastic spectral rendering methods to simulate light sources, allowing for their interactive adjustment and adaptation.
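One elementary building block of spectral rendering is converting a simulated spectral power distribution into tristimulus values by integrating it against color matching functions. The sketch below shows that integration as a simple Riemann sum; the coarse five-sample tables are illustrative stand-ins, not the real CIE 1931 data.

```cpp
// Spectrum-to-XYZ integration via a Riemann sum over the visible range.
#include <cstdio>

int main() {
    const int    N  = 5;
    const double dl = 80.0;  // wavelength step in nm (roughly 400..720 nm)
    // Illustrative spectrum of a light source (power per nm per sample).
    double spd[N]  = {0.2, 0.8, 1.0, 0.7, 0.3};
    // Stand-in color matching functions (real code tabulates CIE 1931).
    double xbar[N] = {0.1, 0.3, 0.3, 0.6, 0.2};
    double ybar[N] = {0.0, 0.2, 0.7, 0.6, 0.1};
    double zbar[N] = {0.5, 0.9, 0.1, 0.0, 0.0};

    double X = 0, Y = 0, Z = 0;
    for (int i = 0; i < N; ++i) {   // X = sum over lambda of spd * xbar * dl
        X += spd[i] * xbar[i] * dl;
        Y += spd[i] * ybar[i] * dl;
        Z += spd[i] * zbar[i] * dl;
    }
    std::printf("XYZ = %.1f %.1f %.1f\n", X, Y, Z);
}
```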
Application performance improvements through VM parameter modification after runtime analysis
(2013)
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While desktop systems are feasible tools for supporting the designer, it is challenging for a non-domain expert to get the right impression of a surface's appearance due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important, as advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts: the complex interaction of light sources and camera position must be controlled through software widgets, so only small parts of the data set can be properly inspected at a time, and real-world lighting is not considered. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows us to reproduce natural view-dependent effects such as real-world reflections and live image-based lighting, and to let users interact with the design and with virtual light sources through natural interaction techniques, enabling a more accurate inspection even for non-domain experts.
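The paper's pipeline is not reproduced here, but live image-based lighting from a 360° camera ultimately relies on looking up a direction in an equirectangular video frame. The sketch below shows that standard direction-to-texel mapping; the frame size and function names are our own.

```cpp
// Map a world-space direction to a texel in an equirectangular frame.
#include <cstdio>
#include <cmath>

// dx, dy, dz: normalized direction; W, H: frame size in pixels.
void dirToEquirect(float dx, float dy, float dz, int W, int H,
                   int& px, int& py) {
    const float PI = 3.14159265f;
    float u = (std::atan2(dx, -dz) + PI) / (2.0f * PI);  // longitude, 0..1
    float v = std::acos(dy) / PI;                        // latitude, 0 top
    px = int(u * (W - 1));
    py = int(v * (H - 1));
}

int main() {
    int px, py;
    dirToEquirect(0.0f, 1.0f, 0.0f, 3840, 1920, px, py); // straight up
    std::printf("up maps to (%d, %d)\n", px, py);        // top row of frame
}
```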
When navigating larger virtual environments and computer games, natural walking is often unfeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces ("human joystick") can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating the bobbing head motions of walking), and vibrotactile cues (via vibrotactile transducers and bass-shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure showed no benefits and reduced usability, though. Together, these results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and tele-presence.