H-BRS Bibliography
Clusters of commodity PCs are widely regarded as the way forward for improving rendering performance and quality in many real-time rendering applications. We describe the design and implementation of our parallel rendering system for such applications. Major design objectives for our system are the use of commodity hardware for all system components, ease of integration into existing Virtual Environments software, and flexibility in applying different rendering techniques, e.g. using ray tracing to render distinct objects with particularly high quality.
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
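To make the underlying idea concrete, the following is a minimal CPU sketch of a point containment query answered by ray casting, i.e. the parity test whose ray/triangle and BVH work the RT hardware can accelerate. This is generic textbook code under stated assumptions, not the article's implementation, and all names are illustrative.

```cpp
// Minimal CPU sketch of a point-containment query via ray casting (generic
// textbook approach, not the article's implementation): a point is inside a
// closed, watertight mesh iff a ray starting at the point crosses the surface
// an odd number of times. RT cores accelerate exactly this ray/triangle + BVH work.
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 a, b, c; };

// Moeller-Trumbore ray/triangle intersection; reports a hit only for t > 0.
static bool hits(const Triangle& tri, Vec3 o, Vec3 d) {
    const double eps = 1e-9;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p  = cross(d, e2);
    double det = dot(e1, p);
    if (det > -eps && det < eps) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 tv = sub(o, tri.a);
    double u = dot(tv, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(tv, e1);
    double v = dot(d, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    return dot(e2, q) * inv > eps;               // hit lies in front of the origin
}

// Parity test: an odd number of crossings means the query point is inside.
bool contains(const std::vector<Triangle>& mesh, Vec3 point) {
    Vec3 dir{0.577, 0.577, 0.577};               // arbitrary, non-axis-aligned direction
    int crossings = 0;
    for (const Triangle& tri : mesh)             // a BVH / RT cores would replace this loop
        if (hits(tri, point, dir)) ++crossings;
    return (crossings % 2) == 1;
}
```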
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze and exploit the human visual system's limitations to increase rendering performance. In particular, foveated rendering has great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing on the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users' quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
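As a brief note on the kind of measure involved: fixation accuracy and eccentricity are commonly expressed as the angle between the tracked gaze direction and the direction towards the fixation target. A minimal sketch of that computation follows; it is generic vector math, not the study's analysis code, and the type and function names are assumptions.

```cpp
// Generic sketch of an angular fixation-error measure (not the study's code):
// the angle between the eye tracker's gaze direction and the known direction to
// the fixation target, both given as unit vectors in HMD space.
#include <algorithm>
#include <cmath>

struct Dir3 { double x, y, z; };                 // assumed to be normalized

double angularErrorDeg(Dir3 gaze, Dir3 target) {
    const double kPi = 3.14159265358979323846;
    double d = gaze.x * target.x + gaze.y * target.y + gaze.z * target.z;
    d = std::max(-1.0, std::min(1.0, d));        // guard against rounding outside [-1, 1]
    return std::acos(d) * 180.0 / kPi;           // angular offset in degrees
}
```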
In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology and chemistry. The steps include the analysis of system requirements in general, the analysis of school experiments and the analysis of input and output device demands. Based on the results of these steps we show a taxonomy of school experiments and provide a comparison between several currently available devices which can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplification usually fails due to the complex structure of the model. Polygonal descriptions, however, provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
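The kind of hybrid node such an octree could use can be sketched as follows. This is an illustrative layout only; the paper's actual data structure is not given in the abstract, and all names are assumptions.

```cpp
// Illustrative sketch of a hybrid octree node (not the paper's actual layout):
// an inner node with eight children, or a leaf that stores either triangle
// references or a coarse voxel approximation, chosen per subtree based on the
// level of detail required.
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

struct VoxelBrick {                  // coarse LoD: densities on a small local grid
    std::array<uint8_t, 8 * 8 * 8> density;
};

struct OctreeNode {
    enum class Kind { Inner, TriangleLeaf, VoxelLeaf } kind = Kind::Inner;
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // used when kind == Inner
    std::vector<uint32_t> triangleIds;                    // used when kind == TriangleLeaf
    std::unique_ptr<VoxelBrick> brick;                    // used when kind == VoxelLeaf
};
```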
Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms
(2013)
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
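For orientation, the core reprojection step that such a point-based cache performs on camera movement can be sketched with generic pinhole-camera math. This is not the paper's code; the matrix layout, names, and the omission of depth buffering are assumptions made for brevity.

```cpp
// Generic sketch of reprojecting cached, already-shaded world-space samples into
// the image of a new camera (not the Render Cache implementation). Occlusion
// handling via a depth buffer is omitted for brevity.
#include <cstdint>
#include <vector>

struct Sample { float wx, wy, wz; uint32_t rgba; };   // shaded world-space point

struct Camera {                        // column-major 4x4 world-to-clip matrix
    float m[16];
    int width, height;
};

// Project one cached sample; returns false if it falls outside the new view.
bool project(const Camera& cam, const Sample& s, int& px, int& py) {
    const float* m = cam.m;
    float cx = m[0] * s.wx + m[4] * s.wy + m[8]  * s.wz + m[12];
    float cy = m[1] * s.wx + m[5] * s.wy + m[9]  * s.wz + m[13];
    float cw = m[3] * s.wx + m[7] * s.wy + m[11] * s.wz + m[15];
    if (cw <= 0.0f) return false;                     // behind the camera
    float ndcX = cx / cw, ndcY = cy / cw;
    if (ndcX < -1.0f || ndcX > 1.0f || ndcY < -1.0f || ndcY > 1.0f) return false;
    px = static_cast<int>((ndcX * 0.5f + 0.5f) * (cam.width  - 1));
    py = static_cast<int>((0.5f - ndcY * 0.5f) * (cam.height - 1));
    return true;                                      // splat s.rgba at (px, py)
}
```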
Interactive rendering of complex models has many applications in the Virtual Reality Continuum. The oil and gas industry uses interactive visualizations of huge seismic data sets to evaluate and plan drilling operations. The automotive industry evaluates designs based on very detailed models. Unfortunately, many of these very complex geometric models cannot be displayed at interactive frame rates on graphics workstations. This is due to the limited scalability of their graphics performance. Recently, there has been a trend towards using networked standard PCs to solve this problem. Care must be taken, however, because clustered PCs have no shared memory. All data and commands have to be sent across the network. It turns out that removing the network bottleneck is a challenging problem in this context. In this article we present some approaches for network-aware parallel rendering on commodity hardware. These strategies comprise technological as well as algorithmic solutions.
This paper describes the work done at our lab to improve the visual and other qualities of Virtual Environments. To achieve better quality, we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework. Although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho around a small kernel and several plug-ins.
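A "small kernel plus plug-ins" design of this kind can be illustrated with a tiny renderer abstraction. The actual basho interfaces are not described in the abstract; the names and the loading mechanism below are assumptions.

```cpp
// Illustrative sketch of a kernel/plug-in split (not basho's actual API): the
// kernel only knows an abstract renderer interface, and concrete renderers are
// provided by plug-ins loaded at runtime.
#include <memory>

class Renderer {                      // all the kernel knows about any renderer
public:
    virtual ~Renderer() = default;
    virtual void beginFrame() = 0;
    virtual void renderWorld() = 0;   // graphics, audio, haptics ... all fit behind this
    virtual void endFrame() = 0;
};

// Factory signature a plug-in might export (e.g. resolved via dlopen/dlsym).
using CreateRendererFn = std::unique_ptr<Renderer> (*)();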
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
Phase Space Rendering
(2007)
Today's Virtual Environment frameworks use scene graphs to represent virtual worlds. We believe that this is a proper technical approach, but a VE framework should try to model its application area as accurately as possible. Therefore, a scene graph is not the best way to represent a virtual world. In this paper we present an easily extensible model for describing entities in the virtual world. Furthermore, we show how this model drives the design of our VE framework and how it is integrated.
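One common way to make an entity description extensible is to compose entities from independently defined aspects rather than fixing their structure in a class hierarchy. The sketch below is a generic illustration of that idea, not the paper's actual model; all names are assumptions.

```cpp
// Generic illustration of an extensible entity description (not the paper's
// model): an entity is a bag of independently defined aspects, so new data or
// behavior can be attached without changing the core class.
#include <memory>
#include <typeindex>
#include <unordered_map>
#include <utility>

class Aspect {                          // base class for any facet of an entity
public:
    virtual ~Aspect() = default;
};

class Entity {
public:
    template <typename T, typename... Args>
    T& add(Args&&... args) {            // attach a new aspect (geometry, sound, ...)
        auto aspect = std::make_unique<T>(std::forward<Args>(args)...);
        T& ref = *aspect;
        aspects_[std::type_index(typeid(T))] = std::move(aspect);
        return ref;
    }
    template <typename T>
    T* get() const {                    // query an aspect if the entity has one
        auto it = aspects_.find(std::type_index(typeid(T)));
        return it == aspects_.end() ? nullptr : static_cast<T*>(it->second.get());
    }
private:
    std::unordered_map<std::type_index, std::unique_ptr<Aspect>> aspects_;
};
```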
Ray tracing, accurate physical simulations with collision detection, particle systems and spatial audio rendering are only a few of the components that are becoming more and more interesting for Virtual Environments due to steadily increasing computing power. Many of these components use geometric queries for their calculations. To speed up those queries, spatial data structures are used. These data structures are mostly implemented individually for each problem, resulting in many separately maintained parts, unnecessary memory consumption, and wasted computing power for maintaining all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
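The shape of such a shared index can be sketched as a single query interface that all subsystems use instead of maintaining their own structures. This is a hedged illustration of the concept, not the paper's design; types and method names are assumptions.

```cpp
// Illustrative sketch of a centralized spatial index shared by several
// subsystems (not the paper's design): rendering, collision, audio and particle
// code all query one structure instead of maintaining separate copies.
#include <vector>

struct AABB { float min[3], max[3]; };
using ObjectId = unsigned int;

class SpatialIndex {
public:
    virtual ~SpatialIndex() = default;
    virtual void insert(ObjectId id, const AABB& bounds) = 0;
    virtual void update(ObjectId id, const AABB& bounds) = 0;   // objects move every frame
    virtual void remove(ObjectId id) = 0;

    // Shared queries: ray casts for rendering/picking, range queries for
    // collision detection, audio attenuation, particle neighborhood lookups, ...
    virtual std::vector<ObjectId> queryRange(const AABB& region) const = 0;
    virtual std::vector<ObjectId> queryRay(const float origin[3],
                                           const float dir[3]) const = 0;
};
```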
"Visual Computing" (VC) fasst als hochgradig aktuelles Forschungsgebiet verschiedene Bereiche der Informatik zusammen, denen gemeinsam ist, dass sie sich mit der Erzeugung und Auswertung visueller Signale befassen. Im Fachbereich Informatik der FH Bonn-Rhein-Sieg nimmt dieser Aspekt eine zentrale Rolle in Lehre und Forschung innerhalb des Studienschwerpunktes Medieninformatik ein. Drei wesentliche Bereiche des VC werden besonders in diversen Lehreinheiten und verschiedenen Projekten vermittelt: Computergrafik, Bildverarbeitung und Hypermedia-Anwendungen. Die Aktivitäten in diesen drei Bereichen fließen zusammen im Kontext immersiver virtueller Visualisierungsumgebungen.
We present an interactive system that uses ray tracing as its rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows rendering plugins to be loaded at runtime. Using this combination, it is possible to interactively simulate effects from geometric optics, such as correct reflections and refractions.
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize virtual objects, the position and orientation of the user's view or the camera is needed. Tracking the user's viewpoint is an essential area in MR applications, especially for interaction and navigation. In present systems, the initialization is often complex. For this reason, we introduce a new method for fast initialization of markerless object tracking. This method is based on Speeded-Up Robust Features (SURF) and, paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
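For context, the offline stage of a SURF-based pipeline typically extracts keypoints and descriptors from reference images of the object so that live camera frames can later be matched against them. The sketch below shows that generic OpenCV step; it is not the authors' code, the file name and parameters are placeholders, and it requires the opencv_contrib xfeatures2d module.

```cpp
// Generic OpenCV sketch of an offline SURF feature-extraction step (not the
// authors' implementation): detect keypoints/descriptors on a reference image
// once, to be matched against live frames in the online stage.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main() {
    // Placeholder reference image of the object to be tracked.
    cv::Mat reference = cv::imread("object_reference.png", cv::IMREAD_GRAYSCALE);
    if (reference.empty()) return 1;

    // Offline stage: detect SURF features once and store them for later matching.
    auto surf = cv::xfeatures2d::SURF::create(400.0);   // Hessian threshold
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    surf->detectAndCompute(reference, cv::noArray(), keypoints, descriptors);

    // The online stage would match each camera frame's descriptors against this
    // database, e.g. with a brute-force L2 matcher:
    //   cv::BFMatcher matcher(cv::NORM_L2);
    //   matcher.match(frameDescriptors, descriptors, matches);
    return 0;
}
```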