Prof. Dr. André Hinkenjann
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built projection-based multi-layer 2.5D display consisting of three layers of images, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience, but it may induce side effects such as motion sickness and simulator sickness. To overcome the limitation of multiple discrete depths, our system uses perspective rendering and head tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicate that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. Participants also reported equal or better spatial perception, while cognitive load remained the same.
This paper presents groupware to study group behavior while conducting a creative task on large, high-resolution displays. Moreover, we present the results of a between-subjects study in which 12 groups of two participants each prototyped a 2D level on a 7 m x 2.5 m large, high-resolution display, using tablet PCs for interaction. Six groups underwent a condition where group members had equal roles and interaction possibilities. Another six groups worked in a condition where group members had different roles: level designer and 2D artist. The results revealed that in the different-roles condition the participants worked significantly more tightly and created more assets. We could also detect some shortcomings of that configuration. We discuss the insights gained regarding system configuration, groupware interfaces, and group behavior.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hit points along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground-truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
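The constant-time cache lookup can be pictured with a small sketch. The following C++ fragment is a minimal, hypothetical simplification: it assumes cache entries addressed by their octree level and quantized cell coordinates in a hash map, with coarser levels acting as a fallback; the paper's actual linkless octree layout differs in detail.

```cpp
// Sketch of a hash-addressed multi-level cache for diffuse illumination.
// Hypothetical simplification of a linkless octree: each cached value is
// keyed by its octree level and the quantized cell coordinates at that level.
#include <cstdint>
#include <functional>
#include <optional>
#include <unordered_map>

struct RGB { float r = 0, g = 0, b = 0; };

struct CellKey {
    int level;                 // octree depth
    int x, y, z;               // cell coordinates at that depth
    bool operator==(const CellKey& o) const {
        return level == o.level && x == o.x && y == o.y && z == o.z;
    }
};

struct CellKeyHash {
    size_t operator()(const CellKey& k) const {
        // Simple coordinate hash; collisions are resolved by the map.
        size_t h = std::hash<int>()(k.level);
        h = h * 31 + std::hash<int>()(k.x);
        h = h * 31 + std::hash<int>()(k.y);
        h = h * 31 + std::hash<int>()(k.z);
        return h;
    }
};

class IlluminationCache {
public:
    explicit IlluminationCache(float sceneSize) : sceneSize_(sceneSize) {}

    // Store diffuse illumination for a hit point at a chosen level.
    void store(float px, float py, float pz, int level, const RGB& value) {
        cache_[keyFor(px, py, pz, level)] = value;
    }

    // Expected constant-time lookup; coarser levels act as a fallback.
    std::optional<RGB> lookup(float px, float py, float pz, int finestLevel) const {
        for (int level = finestLevel; level >= 0; --level) {
            auto it = cache_.find(keyFor(px, py, pz, level));
            if (it != cache_.end()) return it->second;
        }
        return std::nullopt;
    }

private:
    CellKey keyFor(float px, float py, float pz, int level) const {
        float cell = sceneSize_ / float(1 << level);
        return {level, int(px / cell), int(py / cell), int(pz / cell)};
    }

    float sceneSize_;
    std::unordered_map<CellKey, RGB, CellKeyHash> cache_;
};
```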
In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.
Head-mounted displays (HMDs) with integrated eye trackers have opened up a new realm for gaze-contingent rendering. The accurate estimation of gaze depth is essential when modeling the optical capabilities of the eye. Recently, multifocal displays have been gaining importance, requiring focus estimates to control displays or lenses. Deriving the gaze depth solely by sampling the scene's depth at the point-of-regard fails for complex or thin objects because eye tracking suffers from inaccuracies. Gaze depth measures using the eye's vergence only provide an accurate depth estimate for the first meter. In this work, we combine vergence measures and multiple depth measures into feature sets. This data is used to train a regression model to deliver improved estimates. We present a study showing that using multiple features allows for an accurate estimation of the focused depth (MSE < 0.1 m) over a wide range (first 6 m).
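As a rough illustration of the feature-based estimation (not the paper's actual model), the following sketch assembles a hypothetical feature vector of one vergence-based depth and three scene-depth samples around the point-of-regard and fits a plain linear regression to it by gradient descent; the feature layout and training data are invented for the example.

```cpp
// Sketch: combining a vergence-based depth estimate with several scene-depth
// samples into a feature vector and fitting a linear regression model to
// predict the focused depth. Feature layout and training data are hypothetical.
#include <array>
#include <cstdio>
#include <vector>

constexpr int kFeatures = 4;               // [vergence depth, 3 scene-depth samples]
using Features = std::array<double, kFeatures>;

struct Sample { Features x; double gazeDepth; };

struct LinearModel {
    std::array<double, kFeatures> w{};     // weights
    double b = 0.0;                        // bias

    double predict(const Features& x) const {
        double y = b;
        for (int i = 0; i < kFeatures; ++i) y += w[i] * x[i];
        return y;
    }

    // Plain batch gradient descent on mean squared error.
    void fit(const std::vector<Sample>& data, double lr, int epochs) {
        for (int e = 0; e < epochs; ++e) {
            std::array<double, kFeatures> gw{};
            double gb = 0.0;
            for (const Sample& s : data) {
                double err = predict(s.x) - s.gazeDepth;
                for (int i = 0; i < kFeatures; ++i) gw[i] += err * s.x[i];
                gb += err;
            }
            for (int i = 0; i < kFeatures; ++i) w[i] -= lr * gw[i] / data.size();
            b -= lr * gb / data.size();
        }
    }
};

int main() {
    // Toy training set: true depth roughly follows the scene-depth samples.
    std::vector<Sample> train = {
        {{0.8, 1.0, 1.1, 0.9}, 1.0},
        {{2.5, 3.0, 2.9, 3.2}, 3.0},
        {{4.0, 5.2, 4.8, 5.0}, 5.0},
        {{1.4, 2.0, 1.9, 2.1}, 2.0},
    };
    LinearModel model;
    model.fit(train, 0.01, 5000);
    std::printf("predicted depth: %.2f m\n", model.predict({3.0, 4.0, 3.9, 4.1}));
}
```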
Large, high-resolution displays are highly suitable for creating digital environments for co-located collaborative task solving. Yet, placing multiple users in a shared environment may increase the risk of interferences, causing mental discomfort and decreasing the team's efficiency. To mitigate interferences, coordination strategies and techniques have been introduced. However, in mixed-focus collaboration scenarios users repeatedly switch between loose and tight collaboration, so different coordination techniques might be required depending on the current collaboration state of team members. For that, systems have to be able to recognize collaboration states as well as transitions between them to ensure a proper adjustment of the coordination strategy. Previous studies on group behavior during collaboration in front of large displays investigated only collaborative coupling states, not the transitions between them. To address this gap, we conducted a study with 12 participant dyads in front of a tiled display and let them solve two tasks in two different conditions (focus and overview). We looked into group dynamics and categorized transitions by means of changes in proximity, verbal communication, visual attention, visual interface, and gestures. The findings can be valuable for user interface design and the development of group behavior models.
In Western societies a large percentage of the population suffers from some kind of back pain at least once in their lives. There are several approaches addressing back pain through postural modifications. Postural training and activity can be tracked by various wearable devices, most of which are based on accelerometers. We present research on the accuracy of accelerometer-based posture measurements. To this end, we took simultaneous recordings using an optical motion capture system and a system consisting of five accelerometers in three different settings: on a test robot, in a template, and on actual human backs. We compare the accelerometer-based spine curve reconstruction against the motion capture data. Results show that tilt values from the accelerometers are captured highly accurately, and the spine curve reconstruction works well.
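To illustrate the kind of reconstruction involved, the following sketch assumes five accelerometers along the spine, derives a sagittal tilt angle from each static gravity reading, and chains hypothetical fixed-length segments into a 2D spine curve; the sensor placement, axis convention, and segment lengths are illustrative only.

```cpp
// Sketch: reconstructing a 2D sagittal spine curve from accelerometer tilt.
// Assumes five sensors along the spine, each reporting the gravity vector in
// its local frame while the person holds still; segment lengths between
// sensors are hypothetical values in meters.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Vec2 { double x, z; };

// Tilt angle in the sagittal plane from a static accelerometer reading:
// ax is the component along the sensor's "up" axis, az the forward component.
double tiltFromAccel(double ax, double az) {
    return std::atan2(az, ax);   // radians, 0 = perfectly upright
}

std::vector<Vec2> reconstructSpine(const std::vector<double>& tilts,
                                   const std::vector<double>& segmentLengths) {
    std::vector<Vec2> points{{0.0, 0.0}};          // pelvis as origin
    for (size_t i = 0; i < tilts.size(); ++i) {
        const Vec2& p = points.back();
        double len = segmentLengths[i];
        // Each segment is oriented by the tilt measured at its sensor.
        points.push_back({p.x + len * std::sin(tilts[i]),
                          p.z + len * std::cos(tilts[i])});
    }
    return points;
}

int main() {
    // Readings of five sensors (up and forward components of gravity, m/s^2).
    std::vector<std::pair<double, double>> accel = {
        {9.7, 0.5}, {9.6, 1.0}, {9.5, 1.6}, {9.6, 1.2}, {9.7, 0.4}};
    std::vector<double> tilts;
    for (auto [ax, az] : accel) tilts.push_back(tiltFromAccel(ax, az));

    std::vector<double> seg = {0.10, 0.10, 0.10, 0.10, 0.10};
    for (const Vec2& p : reconstructSpine(tilts, seg))
        std::printf("(%.3f, %.3f)\n", p.x, p.z);
}
```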
Large, high-resolution displays have demonstrated their effectiveness in lab settings for cognitively demanding tasks in single-user and collaborative scenarios. The effectiveness is mostly reached through the displays' inherent properties - large display real estate and high resolution - that allow for visualization of complex datasets and support group work and embodied interaction. To raise users' efficiency, however, more sophisticated user support in the form of advanced user interfaces might be needed. For that we need a profound understanding of how large, tiled displays impact users' work and behavior. We need to extract behavioral patterns for different tasks and data types. This paper reports on study results of how users, while working collaboratively, process spatially fixed items on large, tiled displays. The results revealed a recurrent pattern showing that users prefer to process documents column-wise rather than row-wise or erratically.
This article reports on whether the believability of avatars is a suitable modulation criterion for virtual exposure therapy of agoraphobia. To this end, several believability levels for avatars that could hypothetically influence virtual exposure therapy of agoraphobia are developed, together with a potential exposure scenario. Within a study, the work shows a significant influence of the believability levels on presence, co-presence, and realism.
Development and rapid prototyping for large interactive environments like tiled-display walls pose many challenges. One is the heterogeneity of the various applications and libraries. A visual application tailored for a single monitor setup with a certain software environment is difficult to port and distribute to a multi-display, multi-PC setup. As a solution to this problem, we explore the potential of lightweight containerization techniques for distributed interactive applications. In particular, we present how the necessary runtime and build environments including libraries and drivers can be abstracted using the Docker framework. We demonstrate the packing of an existing single-machine GPU-enabled ray tracer inside a container to be used on tiled display walls. The performance measurements reveal that the containerization has a negligible impact on the system’s performance but allows for easy setup, integration, and distribution of complex applications.
Enhancing touch screen interfaces through non-visual cues has been shown to improve performance. In this paper we report on a novel system that explores the usage of a force-sensitive, motion-platform-enhanced tablet interface to improve multi-modal interaction based on visuo-haptic instead of tactile feedback. Extending mobile touch screens with force-sensitive haptic feedback has the potential to enhance performance when interacting with GUIs and to improve the perception and understanding of relations. A user study was performed to determine the perceived recognition of different 3D shapes and the perception of different heights. Furthermore, two application scenarios are proposed to explore our visuo-haptic system. The studies show a positive stance towards the feedback, as well as limitations related to the perception of the feedback.
We present an analysis of eye tracking data produced during a quality-focused user study of our own foveated ray tracing method. Generally, foveated rendering serves the purpose of adapting actual rendering methods to a user’s gaze. This leads to performance improvements which also allow for the use of methods like ray tracing, which would be computationally too expensive otherwise, in fields like virtual reality (VR), where high rendering performance is important to achieve immersion, or fields like scientific and information visualization, where large amounts of data may hinder real-time rendering capabilities. We provide an overview of our rendering system itself as well as information about the data we collected during the user study, based on fixation tasks to be fulfilled during flights through virtual scenes displayed on a head-mounted display (HMD). We analyze the tracking data regarding its precision and take a closer look at the accuracy achieved by participants when focusing the fixation targets. This information is then put into context with the quality ratings given by the users, leading to a surprising relation between fixation accuracy and quality ratings.
Supported by their large size and high resolution, display walls are well suited for different collaboration types. However, in order to foster instead of impede collaboration processes, interaction techniques need to be carefully designed, taking into account the possibilities and limitations of the display size and their effects on human perception and performance. In this paper we investigate the impact of visual distractors (which, for instance, might be caused by other collaborators' input) in peripheral vision on short-term memory and attention. Such distractors occur frequently when multiple users collaborate on large wall display systems and may draw attention away from the main task, potentially affecting performance and cognitive load. Yet, the effect of these distractors is hardly understood. Gaining a better understanding thus may provide valuable input for designing more effective user interfaces. In this article, we report on two interrelated studies that investigated the effect of distractors. Depending on when the distractor is inserted in the task performance sequence, as well as the location of the distractor, user performance can be disturbed: we show that distractors may not affect short-term memory, but do have an effect on attention. We closely look into the effects and identify future directions for designing more effective interfaces.
When navigating larger virtual environments and computer games, natural walking is often not feasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces ("human joystick") can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating bobbing head motions from walking), and vibrotactile cues (via vibrotactile transducers and bass-shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure showed no benefits and reduced usability, though. Together, the results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and tele-presence.
The work at hand outlines a recording setup for capturing hand and finger movements of musicians. The focus is on a series of baseline experiments on the detectability of coloured markers under different lighting conditions. With the goal of capturing and recording hand and finger movements of musicians in mind, requirements for such a system and existing approaches are analysed and compared. The results of the experiments and the analysis of related work show that the envisioned setup is suited for the expected scenario.
Human beings spend much time under the influence of artificial lighting. Often, it is beneficial to adapt lighting to the task, as well as the user’s mental and physical constitution and well-being. This formulates new requirements for lighting - human-centric lighting - and drives a need for new light control methods in interior spaces. In this paper we present a holistic system that provides a novel approach to human-centric lighting by introducing simulation methods into interactive light control, to adapt the lighting based on the user's needs. We look at a simulation and evaluation platform that uses interactive stochastic spectral rendering methods to simulate light sources, allowing for their interactive adjustment and adaption.
This paper introduces a novel and efficient segmentation method designed for articulated hand motion. The method is based on a graph representation of temporal structures in human hand-object interaction. Along with the method for temporal segmentation we provide an extensive new database of hand motions. The experiments performed on this dataset show that our method is capable of a fully automatic hand motion segmentation which largely coincides with human user annotations.
There is a need for rapid prototyping tools for large, high-resolution displays (LHRDs) in both scientific and commercial domains. The area of LHRDs is still poorly explored and has no established standards, so developers have to experiment extensively with new interaction and visualization concepts. Therefore, a rapid prototyping tool for LHRDs has to fulfill two functions: ease the process of application development, and make an application runnable on a broad range of LHRD setups. The latter is a challenge, since most LHRDs are driven by multiple compute nodes and require distributed applications.
We present a system which allows for guiding the image quality in global illumination (GI) methods by user-specified regions of interest (ROIs). This is done with either a tracked interaction device or a mouse-based method, making it possible to create a visualization with varying convergence rates throughout one image towards a GI solution. To achieve this, we introduce a scheduling approach based on Sparse Matrix Compression (SMC) for efficient generation and distribution of rendering tasks on the GPU that allows for altering the sampling density over the image plane. Moreover, we present a prototypical approach for filtering the new, possibly sparse samples into a final image. Finally, we show how large-scale display systems can benefit from rendering with ROIs.
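The effect of an ROI on sampling density can be sketched as a per-pixel sample budget. The falloff function and budget values below are hypothetical; in the paper, the resulting tasks are generated and distributed on the GPU via the SMC-based scheduler.

```cpp
// Sketch: deriving a per-pixel sample budget from a user-specified region of
// interest so that convergence is faster inside the ROI. The linear falloff
// and the budget values are hypothetical.
#include <algorithm>
#include <cmath>
#include <vector>

struct ROI { float cx, cy, radius; };

std::vector<int> sampleBudget(int width, int height, const ROI& roi,
                              int minSamples, int maxSamples) {
    std::vector<int> budget(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float d = std::hypot(x - roi.cx, y - roi.cy) / roi.radius;
            // Linear falloff from maxSamples at the ROI center to minSamples
            // outside the ROI.
            float t = std::clamp(1.0f - d, 0.0f, 1.0f);
            budget[y * width + x] =
                minSamples + int(t * (maxSamples - minSamples));
        }
    }
    return budget;
}
```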
Position awareness in unknown and large indoor spaces represents a great advantage for people: every day, pedestrians have to search for specific places, products and services. In this work a positioning solution able to localize the user based on data measured with a mobile device is described and evaluated. The position estimate uses data from smartphone built-in sensors, the WiFi (Wireless Fidelity) adapter, and map information of the indoor environment (e.g. walls and obstacles). A probability map, derived from statistical information of the users' tracked locations over a period of time in the test scenario, is generated and embedded in a map graph in order to correct and combine the position estimates under a Bayesian representation. PDR (Pedestrian Dead Reckoning), beacon-based Weighted Centroid position estimates, map information obtained from the building's OpenStreetMap XML representation, and the probability map of users' path density are combined using a Particle Filter and implemented in a smartphone application. Based on evaluations, this work verifies that the use of smartphone hardware components, map data and its semantic information represented in the form of an OpenStreetMap structure provides an average error of 2.48 meters after 1,700 travelled meters and a scalable indoor positioning solution. The Particle Filter algorithm used to combine the various sources of information, its WiFi-based observations, the particle weighting process and the mapping approach allowing the inclusion of knowledge about new indoor environments show a promising approach for an extensible indoor navigation system.
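A minimal sketch of the fusion step is given below: it assumes a PDR step as motion model, a WiFi-based position estimate as observation, a placeholder isWalkable() map check, and systematic resampling. The noise parameters, the likelihood model and the map predicate are hypothetical; the real system additionally uses the probability map and the OpenStreetMap semantics described above.

```cpp
// Sketch of one particle-filter fusion step: PDR prediction, a wall check as
// map constraint, a Gaussian likelihood around the WiFi estimate, and
// systematic resampling. All parameters and isWalkable() are placeholders.
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, heading, weight; };

// Placeholder for the map lookup (walls/obstacles from OpenStreetMap data).
bool isWalkable(double /*x*/, double /*y*/) { return true; }

void particleFilterStep(std::vector<Particle>& particles,
                        double stepLength, double headingChange,
                        double wifiX, double wifiY, double wifiSigma,
                        std::mt19937& rng) {
    std::normal_distribution<double> stepNoise(0.0, 0.1);
    std::normal_distribution<double> headNoise(0.0, 0.05);

    // 1. Predict: apply the PDR step with per-particle noise.
    for (Particle& p : particles) {
        p.heading += headingChange + headNoise(rng);
        double len = stepLength + stepNoise(rng);
        double nx = p.x + len * std::cos(p.heading);
        double ny = p.y + len * std::sin(p.heading);
        // 2. Map constraint: particles that would cross a wall lose weight.
        if (isWalkable(nx, ny)) { p.x = nx; p.y = ny; }
        else                    { p.weight *= 1e-3; }
        // 3. Observation: Gaussian likelihood around the WiFi estimate.
        double dx = p.x - wifiX, dy = p.y - wifiY;
        p.weight *= std::exp(-(dx * dx + dy * dy) / (2.0 * wifiSigma * wifiSigma));
    }

    // Normalize weights.
    double sum = 0.0;
    for (const Particle& p : particles) sum += p.weight;
    if (sum <= 0.0) sum = 1.0;
    for (Particle& p : particles) p.weight /= sum;

    // 4. Systematic resampling.
    std::vector<Particle> resampled;
    resampled.reserve(particles.size());
    std::uniform_real_distribution<double> u(0.0, 1.0 / particles.size());
    double r = u(rng), c = particles[0].weight;
    size_t i = 0;
    for (size_t m = 0; m < particles.size(); ++m) {
        double target = r + m / double(particles.size());
        while (c < target && i + 1 < particles.size()) c += particles[++i].weight;
        Particle p = particles[i];
        p.weight = 1.0 / particles.size();
        resampled.push_back(p);
    }
    particles = std::move(resampled);
}
```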
Most VE frameworks try to support many different input and output devices. They do not concentrate so much on the rendering because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers. This includes sound renderers, physics renderers and software-based graphics renderers. While our VE framework, named basho, is still under development, an alpha version runs under Linux and MacOS X.
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. Eventually, we show that our optimizations increase the performance of RHF by two orders of magnitude when compared to the original CPU implementation and one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times relates to unfiltered path tracing and how much time is needed to achieve identical quality when compared to an unfiltered path traced result. Finally, we summarize our work and describe possible future applications and research based on this.
Rendering techniques for design evaluation and review or for visualizing large volume data often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates. A view-direction-based rendering technique renders the user's central field of view in high quality, whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view, thus giving the opportunity to increase rendering efficiency. We propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
This presentation gives an overview of current research in the area of high quality rendering and visualization at the Institute of Visual Computing (IVC). Our research facility has some unique software and hardware installations, of which we will describe a large, ultra-high resolution (72 megapixel) video wall in this presentation.
A recent trend in interactive environments is large, ultra-high resolution displays (LUHRDs). Compared to other large interactive installations, like the CAVE™, LUHRDs are usually flat or slightly curved and have a significantly higher resolution, offering new research and application opportunities.
This tutorial provides information for researchers and engineers who plan to install and use a large ultra-high resolution display. We will give detailed information on the hardware and software of recently created and established installations and will show the variety of possible approaches. Also, we will talk about rendering software, rendering techniques and interaction for LUHRDs, as well as applications.
We present basho, a lightweight and easily extendable virtual environment (VE) framework. Key benefits of this framework are independence of the scene element representation and of the rendering API. The main goal was to make VE applications flexible without the need to change them, not only by being independent from input and output devices. As an example, with basho it is possible to switch from local illumination models to ray tracing by just replacing the renderer, or to replace the graphical representation of scene elements without changing the application. Furthermore, it is possible to mix rendering technologies within a scene. This paper focuses on the abstraction of the scene element representation.
In this article, we report on a user study investigating the effects of multisensory cues on triggering emotional responses in immersive games. Isolating the effect of a specific sensory cue on the emotional state is a difficult feat. The experiment is the first of a series that aims at producing usable guidelines for reproducing similar emotional responses, as well as methods to measure the effects. As such, we are interested in methodologies both to design effective stimuli and to assess their quality and effect. We start by identifying the main challenges and the followed methodology. Thereafter, we closely analyze the study results to address some of the challenges, and identify where the potential is for improving the induced stimuli (cause) and effect, as well as the analytical methods used to pinpoint the extent of the effect.
This paper introduces a novel zooming interface deploying a pico projector that, instead of a second visual display, leverages audioscapes for contextual information. The technique enhances current flashlight metaphor approaches, supporting flexible usage within the domain of spatial augmented reality to focus on object- or environment-related details. Within a user study we focused on quantifying the projection limitations related to the depiction of details through the pico projector and validated the interaction approach. The quantified results of the study correlate pixel density, detail and proximity, which can greatly aid the design of more effective, legible zooming interfaces for pico projectors - the study can form an example testbed that can be applied to testing aberrations with other projectors. Furthermore, users rated the zooming technique using audioscapes well, showing the validity of the approach. The studies form the foundation for extending our work by detailing the audio-visual approach and looking more closely into the role of real-world features in interpreting projected content.
The simulation of global illumination for Virtual Reality (VR) applications is a challenging process. The main reason for this is the high computational complexity of the Monte Carlo integration, which makes sufficient frame rates hard to achieve. It is even more challenging to adapt this process to large, high-resolution displays (LHRDs), because the resolution of an image to be generated becomes huge compared to a single display. One possibility to decrease the rendering time without worsening image quality is to involve additional computational nodes. The process of image creation has to be split into multiple rendering tasks which may be computed independently. The resulting image data has to be conveyed to the display nodes, where it is combined and passed to the corresponding output devices. In this paper we introduce an extended version of our software interface which allows integrating a flexible distributed rendering approach into VR frameworks, thus enabling high-quality realistic image generation on LHRDs. The interface describes a software architecture which realizes the communication between manager, computational and display nodes, including rendering subtasks and data distribution, and allows for the implementation of different load-balancing methods.
Improving data acquisition techniques and rising computational power keep producing more and larger data sets that need to be analyzed. These data sets usually do not fit into a GPU's memory. To interactively visualize such data with direct volume rendering, sophisticated techniques for problem domain decomposition, memory management and rendering have to be used. The volume renderer Volt is used to show how CUDA is efficiently utilised to manage the volume data and a GPU's memory with the aim of low opacity volume renderings of large volumes at interactive frame rates.
In contrast to projection-based systems, large, high resolution multi-display systems offer a high pixel density on a large visualization area. This enables users to step up to the displays and see a small but highly detailed area. If the users move back a few steps they don't perceive details at pixel level but will instead get an overview of the whole visualization. Rendering techniques for design evaluation and review or for visualizing large volume data (e.g. Big Data applications) often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates.
A view direction based (VDB) rendering technique renders the user's central field of view in high quality whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view. This approach mimics the physiology of the human eye and conserves the advantage of highly detailed information when standing close to the multi-display system as well as the general overview of the whole scene. In this paper we propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
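A minimal sketch of the view-direction-based quality selection is shown below: each pixel's level of detail is chosen from its angular distance to the user's central view direction. The thresholds and the number of levels are hypothetical; in the described system, level 0 would correspond to full-quality ray tracing and higher levels to coarser sparse-voxel-octree sampling in the periphery.

```cpp
// Sketch: selecting a level of detail per pixel from its angular distance to
// the user's central view direction. Thresholds (degrees) are hypothetical.
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Angle (degrees) between the pixel's ray direction and the view center.
float eccentricityDeg(const Vec3& pixelDir, const Vec3& viewCenterDir) {
    float c = dot(pixelDir, viewCenterDir) / (length(pixelDir) * length(viewCenterDir));
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;
    return std::acos(c) * 180.0f / 3.14159265f;
}

int levelOfDetail(const Vec3& pixelDir, const Vec3& viewCenterDir) {
    float e = eccentricityDeg(pixelDir, viewCenterDir);
    if (e < 5.0f)  return 0;   // central field of view: full quality
    if (e < 15.0f) return 1;
    if (e < 30.0f) return 2;
    return 3;                  // far periphery: coarsest level
}
```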
This article describes an approach to rapidly prototype the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them and proposes the best alternative to the user. The user can, without any prior knowledge about the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
Most Virtual Reality (VR) applications use rendering methods which implement local illumination models, simulating only direct interaction of light with 3D objects. They do not take into account the energy exchange between the objects themselves, making the resulting images look non-optimal. The main reason for this is that the simulation of global illumination has a high computational complexity, drastically decreasing the frame rate. This makes, for example, user interaction quite challenging. One way to decrease the image generation time of rendering methods which implement global illumination models is to involve additional compute nodes in the process of image creation, distribute the rendering subtasks among them and then collate the results of the subtasks into a single image. Such a strategy is called distributed rendering. In this paper we introduce a software interface which gives a recommendation of how the distributed rendering approach may be integrated into VR frameworks to achieve lower generation times for high-quality, realistic images. The interface describes a client-server architecture which realizes the communication between visualization and compute nodes, including data and rendering subtask distribution, and may be used for the implementation of different load-balancing methods. We show an example of the implementation of the proposed interface in the context of realistic rendering of buildings for decisions on interior options.
Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms
(2013)
We present a real-time approximate simulation of some camera errors and the effects these errors have on some common computer vision algorithms for robots. The simulation uses a software framework for real-time post processing of image data. We analyse the performance of some basic algorithms for robotic vision when adding modifications to images due to camera errors. The result of each algorithm / error combination is presented. This simulation is useful to tune robotic algorithms to make them more robust to imperfections of real cameras.
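Two of the simpler error models can be sketched as post-processing passes over an image buffer, as below. The noise level and vignetting falloff are hypothetical; the actual framework applies such passes in real time before the vision algorithms are run.

```cpp
// Sketch of two post-processing passes that approximate camera errors on a
// grayscale image buffer: additive Gaussian sensor noise and radial
// vignetting. Noise level and falloff exponent are hypothetical.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

using Image = std::vector<float>;   // row-major, values in [0, 1]

void addGaussianNoise(Image& img, float sigma, std::mt19937& rng) {
    std::normal_distribution<float> noise(0.0f, sigma);
    for (float& v : img) v = std::clamp(v + noise(rng), 0.0f, 1.0f);
}

void addVignetting(Image& img, int width, int height, float strength) {
    float cx = width * 0.5f, cy = height * 0.5f;
    float maxR = std::hypot(cx, cy);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float r = std::hypot(x - cx, y - cy) / maxR;        // 0 center, 1 corner
            img[y * width + x] *= 1.0f - strength * r * r;      // quadratic falloff
        }
}
```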
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
Application performance improvements through VM parameter modification after runtime analysis
(2013)
We present our approach to extending a Virtual Reality software framework towards the use for Augmented Reality applications. Although VR and AR applications have very similar requirements in terms of abstract components (like 6DOF input, stereoscopic output, simulation engines), the requirements in terms of hardware and software vary considerably. In this article we would like to share the experience gained from adapting our VR software framework for AR applications, and address the design issues involved in this task. The result is a basic VR/AR software that allows us to implement interactive applications without fixing their type (VR or AR) beforehand. Switching from VR to AR is a matter of changing the application's configuration file.
We present the extensible post-processing framework GrIP, usable for experimenting with screen-space graphics algorithms in arbitrary applications. The user can easily implement new ideas as well as add known operators as components to existing ones. Through a well-defined interface, operators are realized as plugins that are loaded at run-time. Operators can be combined by defining a post-processing graph (PPG) in a specific XML format where nodes are the operators and edges define their dependencies. User-modifiable parameters can be manipulated through an automatically generated GUI. In this paper we describe our approach, show some example effects and give performance numbers for some of them.
We present a graph-based framework for post-processing filters, called GrIP, providing the possibility of arranging and connecting compatible filters in a directed, acyclic graph for real-time image manipulation. This means that whole filter graphs can be constructed through an external interface, avoiding the need for a recompilation cycle after changes to the post-processing. Filter graphs are implemented as XML files containing a collection of filter nodes with their parameters as well as linkage (dependency) information. Implemented methods include (but are not restricted to) depth of field, depth darkening and an implementation of screen-space shadows, all applicable in real time, with manipulable parameterizations.
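The dependency handling in such a filter graph can be sketched as a topological sort over the nodes. In GrIP the nodes and edges come from the XML description and the operators are run-time plugins; the sketch below works on plain indices only.

```cpp
// Sketch: deriving an execution order for a post-processing graph (PPG) from
// its dependency edges using Kahn's topological sort.
#include <queue>
#include <stdexcept>
#include <utility>
#include <vector>

// edges[i] = (from, to): filter 'from' must run before filter 'to'.
std::vector<int> executionOrder(int numFilters,
                                const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(numFilters);
    std::vector<int> indegree(numFilters, 0);
    for (auto [from, to] : edges) { adj[from].push_back(to); ++indegree[to]; }

    std::queue<int> ready;
    for (int i = 0; i < numFilters; ++i)
        if (indegree[i] == 0) ready.push(i);

    std::vector<int> order;
    while (!ready.empty()) {
        int f = ready.front(); ready.pop();
        order.push_back(f);
        for (int next : adj[f])
            if (--indegree[next] == 0) ready.push(next);
    }
    if ((int)order.size() != numFilters)
        throw std::runtime_error("post-processing graph contains a cycle");
    return order;
}
```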
This work presents a method for generating and rendering natural-looking vegetation on very large areas while taking ecological factors into account. Due to the complexity of biological systems and the richness of detail of plant models, the generation and visualization of vegetation is a challenging area of computer graphics and can considerably increase the realism of landscape visualizations. Building on [DMS06], Silva generates the vegetation such that the Wang tiles required for rendering and the partial distributions associated with them can be reused. To this end, a method is presented for generating Poisson disk distributions with variable radii on seamless Wang tile sets without computationally expensive global optimization. By incorporating neighborhoods and freely configurable generation pipelines, arbitrary abiotic and biotic factors can be taken into account during vegetation generation. The plant distributions generated by Silva on Wang tiles allow the acceleration data structures built on them to be reused during visualization. Through multi-level instancing and nested kd-trees, vegetated areas of hundreds of square kilometers can be visualized with low render times and a small memory footprint.
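The dart-throwing variant of Poisson disk sampling with variable radii can be sketched as follows. The rejection rule and radius range are hypothetical simplifications; Silva's actual method additionally keeps the distributions seamless across Wang tile boundaries without global optimization.

```cpp
// Sketch: brute-force dart throwing for a Poisson disk distribution with
// variable radii on a single square tile. A candidate is rejected if it lies
// closer to an accepted point than the larger of the two radii.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Disk { double x, y, r; };

std::vector<Disk> dartThrowing(double tileSize, double rMin, double rMax,
                               int maxAttempts, std::mt19937& rng) {
    std::uniform_real_distribution<double> pos(0.0, tileSize);
    std::uniform_real_distribution<double> rad(rMin, rMax);
    std::vector<Disk> accepted;
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        Disk c{pos(rng), pos(rng), rad(rng)};
        bool ok = true;
        for (const Disk& d : accepted) {
            double minDist = std::max(c.r, d.r);
            if (std::hypot(c.x - d.x, c.y - d.y) < minDist) { ok = false; break; }
        }
        if (ok) accepted.push_back(c);
    }
    return accepted;
}
```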
We present a system for interactive magnetic field simulation in an AR-setup. The aim of this work is to investigate how AR technology can help to develop a better understanding of the concept of fields and field lines and their relationship to the magnetic forces in typical school experiments. The haptic feedback is provided by real magnets that are optically tracked. In a stereo video see-through head-mounted display, the magnets are augmented with the dynamically computed field lines.
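The field line computation can be illustrated with a minimal sketch that traces a line of an ideal dipole by stepping along the normalized field direction. The dipole model, start point and step size are hypothetical; the real system derives the field from the optically tracked magnets and overlays the lines in the head-mounted display.

```cpp
// Sketch: tracing one field line of an ideal magnetic dipole with forward
// Euler steps along the normalized field direction. Constants are dropped;
// dipole moment, start point and step size are hypothetical.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
double norm(Vec3 v) { return std::sqrt(dot(v, v)); }

// Field of an ideal dipole with moment m at the origin (up to constants):
// B(r) ~ (3 (m . rhat) rhat - m) / |r|^3
Vec3 dipoleField(Vec3 m, Vec3 r) {
    double len = norm(r);
    Vec3 rhat = (1.0 / len) * r;
    return (1.0 / (len * len * len)) * ((3.0 * dot(m, rhat)) * rhat + (-1.0 * m));
}

std::vector<Vec3> traceFieldLine(Vec3 m, Vec3 start, double step, int steps) {
    std::vector<Vec3> line{start};
    for (int i = 0; i < steps; ++i) {
        Vec3 b = dipoleField(m, line.back());
        double len = norm(b);
        if (len < 1e-12) break;                 // stop near singular points
        line.push_back(line.back() + (step / len) * b);
    }
    return line;
}

int main() {
    auto line = traceFieldLine({0, 0, 1}, {0.02, 0, 0}, 0.002, 500);
    std::printf("traced %zu points\n", line.size());
}
```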
This contribution presents the interactive volume renderer Volt for the NVIDIA CUDA architecture. The speed-up is achieved by exploiting the technical properties of the CUDA device, by partitioning the algorithm, and by asynchronous execution of the CUDA kernel. Parallelism is exploited on the host, on the device, and between host and device. We show how the computations are carried out efficiently through targeted use of resources. The results are copied back, so the kernel does not have to be executed on the device used for display. Synchronization of the CUDA threads is not necessary.
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize virtual objects, the position and orientation of the user's view or the camera is needed. Tracking of the user's viewpoint is an essential area in MR applications, especially for interaction and navigation. In present systems, the initialization is often complex. For this reason, we introduce a new method for fast initialization of markerless object tracking. This method is based on Speeded Up Robust Features (SURF) and, paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
In this paper we present an approach to efficiently trace single rays on the Cell processor, instead of using ray packets. To benefit from the performance of this processor, a data structure is chosen which allows traversal without excessive accesses to main memory. Together with careful optimization for SIMD processing, a performance comparable to a packet-based ray tracer running on the same hardware is achieved. In special cases, when the coherency of the traced rays gets very low, it even outperforms the packet-based approach.
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (an FPGA) synchronises and combines several graphics inputs. The application area is computer graphics, especially the rendering of large 3D models, which is a compute-intensive task. Therefore, complex scenes are generated on parallel systems and merged to give the requested output image. So far, the transport of intermediate results has often been done over a local area network. However, as this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals with an FPGA.
This paper describes FPGA-based image combining for parallel graphics systems. The goal of our current work is to reduce network traffic and latency to increase the performance of parallel visualization systems. Initial data distribution is based on a common Ethernet network, whereas image combining and returning differs from traditional parallel rendering methods: calculated sub-images are grabbed directly from the DVI ports for fast image compositing by an FPGA-based combiner.
We present fast complete rebuild strategies, as well as adapted intelligent local update strategies for acceleration data structures for interactive ray tracing environments. Both approaches can be combined. Although the proposed strategies could be used with other data structures and architectures as well, they are currently tailored to the Bounding Interval Hierarchy on the Cell chip.
Ray tracing, accurate physical simulations with collision detection, particle systems and spatial audio rendering are only a few components that become more and more interesting for Virtual Environments due to the steadily increasing computing power. Many components use geometric queries for their calculations. To speed up those queries, spatial data structures are used. These data structures are mostly implemented individually for every problem, resulting in many separately maintained parts, unnecessary memory consumption and wasted computing power for maintaining all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination it is possible to interactively simulate effects from geometric optics, like correct reflections and refractions.
Today's Virtual Environment frameworks use scene graphs to represent virtual worlds. We believe that this is a proper technical approach, but a VE framework should try to model its application area as accurately as possible, and a scene graph is not the best way to represent a virtual world. In this paper we present an easily extensible model to describe entities in the virtual world. Furthermore, we show how this model drives the design of our VE framework and how it is integrated.
Phase Space Rendering
(2007)
We present our work on phase space rendering. Every radiance sample in space has a location and a direction from which it is received. These degrees of freedom make up a phase space. The rendering problem of generating a discrete image from single radiance values is reduced to reconstructing a continuous radiance function from sparse samples in its phase space. The problem of reconstruction in a sparsely sampled space is solved by utilizing scattered data interpolation (SDI) methods. We provide numerical and visual evaluations of experiments with three SDI methods.
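As an illustration of scattered data interpolation over such a phase space, the sketch below uses Shepard's inverse distance weighting with a hypothetical combined position-direction distance. This is one common SDI method and not necessarily among the three evaluated in the paper.

```cpp
// Sketch: Shepard inverse-distance-weighted interpolation of radiance samples
// that live in a phase space of position and direction. The distance metric
// and the power parameter are hypothetical.
#include <cmath>
#include <vector>

struct PhaseSample {
    double px, py, pz;     // location of the radiance sample
    double dx, dy, dz;     // unit direction from which it is received
    double radiance;
};

// Combined phase-space distance: positional distance plus a weighted
// directional term (1 - cosine of the angle between directions).
double phaseDistance(const PhaseSample& a, const PhaseSample& b, double dirWeight) {
    double dp = std::sqrt((a.px - b.px) * (a.px - b.px) +
                          (a.py - b.py) * (a.py - b.py) +
                          (a.pz - b.pz) * (a.pz - b.pz));
    double cosang = a.dx * b.dx + a.dy * b.dy + a.dz * b.dz;
    return dp + dirWeight * (1.0 - cosang);
}

double interpolateRadiance(const PhaseSample& query,
                           const std::vector<PhaseSample>& samples,
                           double power = 2.0, double dirWeight = 0.5) {
    double num = 0.0, den = 0.0;
    for (const PhaseSample& s : samples) {
        double d = phaseDistance(query, s, dirWeight);
        if (d < 1e-9) return s.radiance;       // exact hit
        double w = 1.0 / std::pow(d, power);
        num += w * s.radiance;
        den += w;
    }
    return den > 0.0 ? num / den : 0.0;
}
```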
We present a Virtual Reality application enabling interactive, physically correct simulation of tube-like flexible objects. Our objective was to describe flexible objects by a set of parameters (length, diameter and material constants) instead of rigid geometry (triangle meshes) and to give the user the possibility to add, delete and manipulate those flexible objects in a stereo projected environment in real-time.
We present an application that allows users to interactively visualise data of medical studies. This application enables the users to get a "first view" on the data set, to interact with the data in an intuitive way and to analyse it collaboratively. The goal of these first studies of the data sets is to find relations between measured values.
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
This paper describes the work done at our lab to improve the visual and other quality of Virtual Environments. To be able to achieve better quality we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework; although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho with a small kernel and several plug-ins.
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil & gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations, due to the limited scalability of their graphics performance. Recently there has been a trend to use networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory: all data and commands have to be sent across the network. It turns out that the removal of the network bottleneck is a challenging problem in this context. In this article we present some approaches for network-aware parallel rendering on commodity hardware. These strategies are technological as well as algorithmic solutions.
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
Clusters of commodity PCs are widely considered as the way to go to improve rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for real-time rendering applications. Major design objectives for our system are: usage of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with a particularly high quality.