Prof. Dr. André Hinkenjann
Document Type
- Conference Object (70)
- Article (19)
- Report (3)
- Conference Proceedings (2)
- Part of a Book (1)
- Research Data (1)
- Preprint (1)
Keywords
- Virtual Reality (4)
- Ray Tracing (3)
- foveated rendering (3)
- 3D user interface (2)
- 3D user interfaces (2)
- Augmented Reality (2)
- CUDA (2)
- Computer Graphics (2)
- Distributed rendering (2)
- Garbage collection (2)
- Java virtual machine (2)
- Large, high-resolution displays (2)
- Unity (2)
- VR (2)
- eye-tracking (2)
- interaction (2)
- user study (2)
- 2D Level Design (1)
- 3D User Interface (1)
- 3D visualization (1)
- 3D interfaces (1)
- AMBER (1)
- AR (1)
- Algorithms (1)
- Bounding Volume Hierarchy (1)
- Cell Processor (1)
- Challenges (1)
- Clusters (1)
- Co-located Collaboration (1)
- Co-located work (1)
- Computer graphics (1)
- Computer-supported Cooperative Work (1)
- Computing methodologies (1)
- Containerization (1)
- CyberGlove (1)
- Docker (1)
- Real-time (1)
- Ecosystem simulation (1)
- Edutainment (1)
- Electromagnetic Fields (1)
- Engineering (1)
- Escape analysis (1)
- Eye Tracking (1)
- Fixed spatial data (1)
- Focus plus context (1)
- Force field (1)
- Games (1)
- Gaze Depth Estimation (1)
- Gaze-contingent depth-of-field (1)
- Global Illumination (1)
- Group Behavior (1)
- Group behavior (1)
- Group behavior analysis (1)
- Groupware (1)
- Hand Guidance (1)
- Hand Tracking (1)
- High-resolution displays (1)
- Higher education (1)
- Human centered computing (1)
- Hydrocarbon (1)
- Image Processing (1)
- Image-based rendering (1)
- Immersion (1)
- Immersive analytics (1)
- Information interaction (1)
- Instantiation (1)
- Interaction devices (1)
- Large display interaction (1)
- Lighting simulation (1)
- Main Memory (1)
- Memory management (1)
- Methodologies (1)
- Mixed (1)
- Motion Capture (1)
- Multisensory cues (1)
- Musical Performance (1)
- Navigation interface (1)
- Neural representations (1)
- Performance (1)
- Poisson Disc Distribution (1)
- Presence (1)
- Radix Sort (1)
- Ray Casting (1)
- Ray tracing (1)
- Reflectance modeling (1)
- Render Cache (1)
- Rendering (1)
- School experiments (1)
- Software Architecture (1)
- Software Framework (1)
- Split Axis (1)
- Survey (1)
- Tactile Feedback (1)
- Tactile feedback (1)
- Terrain rendering (1)
- Tiled displays (1)
- Tiled-display walls (1)
- Touchscreen interaction (1)
- User Roles (1)
- User Study (1)
- User engagement (1)
- User interfaces (1)
- VR system design (1)
- VR-based systems (1)
- Virtual reality (1)
- Visualization (1)
- Visualization design and evaluation methods (1)
- Visualization systems and tools (1)
- Volume rendering (1)
- Wang-tiles (1)
- accelerometer (1)
- augmented, and virtual realities (1)
- authoring tools (1)
- back-of-device interaction (1)
- bass-shaker (1)
- co-located collaboration (1)
- collaboration (1)
- component analyses (1)
- computer vision (1)
- data analysis (1)
- data glove (1)
- database (1)
- eye movement (1)
- eye tracking (1)
- gaming (1)
- gaze (1)
- grasp motions (1)
- grasping (1)
- hand guidance (1)
- haptic interfaces (1)
- haptics (1)
- human computer interaction (1)
- human-centric lighting (1)
- immersion (1)
- interaction techniques (1)
- interactive computer graphics (1)
- large-high-resolution displays (1)
- leaning (1)
- mixed reality (1)
- mobile applications (1)
- mobile projection (1)
- motion capture (1)
- multi-layer display (1)
- multisensory cues (1)
- path tracing (1)
- perceived quality (1)
- performance optimizations (1)
- peripheral vision (1)
- posture analysis (1)
- prehensile motions (1)
- projection based systems (1)
- proxemics (1)
- rapid prototyping tool (1)
- ray tracing (1)
- real-time (1)
- region of interest (1)
- scene element representation (1)
- sensemaking (1)
- short-term memory (1)
- software engineering (1)
- spatial augmented reality (1)
- spectral rendering (1)
- spinal posture (1)
- stereoscopic vision (1)
- surface textures (1)
- tiled displays (1)
- tools for education (1)
- user engagement (1)
- vibration (1)
- virtual environment framework (1)
- virtual environments (1)
- virtual reality (1)
- visualisation (1)
- visuohaptic feedback (1)
- wearable sensor (1)
- whole-body interface (1)
- zooming interface (1)
- zooming interfaces (1)
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize virtual objects, the position and orientation of the user's view or the camera are needed. Tracking the user's viewpoint is an essential area in MR applications, especially for interaction and navigation. In present systems, the initialization is often complex. For this reason, we introduce a new method for the fast initialization of markerless object tracking. This method is based on Speeded Up Robust Features (SURF) and, perhaps paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
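As a rough illustration of such an offline stage, the sketch below extracts keypoints and descriptors from reference images once, so the online stage only has to match against them. It is a minimal sketch assuming OpenCV; ORB stands in for SURF (SURF requires OpenCV's non-free contrib build), and it does not reproduce the paper's actual pipeline.

```python
# Hypothetical offline feature-extraction stage for markerless tracking
# initialization. ORB is used here in place of SURF, which needs the
# non-free opencv-contrib module.
import cv2

def build_offline_database(reference_image_paths):
    """Extract keypoints/descriptors once, so the online stage only matches."""
    detector = cv2.ORB_create(nfeatures=1000)
    database = []
    for path in reference_image_paths:
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, descriptors = detector.detectAndCompute(image, None)
        database.append({"path": path,
                         "keypoints": keypoints,
                         "descriptors": descriptors})
    return database
```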
A New Approach of Using Two Wireless Tracking Systems in Mobile Augmented Reality Applications
(2003)
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user’s gaze, exploiting the limitations of the human visual system to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), where a high level of immersion is an important factor; such immersion can only be achieved with high rendering performance and also helps to reduce nausea. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants’ accuracy in focusing the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users’ quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy, and quality ratings.
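The following sketch illustrates, in general terms, how eye-tracker precision and fixation accuracy can be estimated from gaze samples given in degrees of visual angle. It is not the study's actual analysis code, and the data layout is an assumption.

```python
# Illustrative only: precision as sample-to-sample dispersion (RMS),
# accuracy as mean angular offset from the fixation target.
import numpy as np

def precision_rms(gaze_deg):
    """RMS of angular differences between successive gaze samples."""
    deltas = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)
    return float(np.sqrt(np.mean(deltas ** 2)))

def accuracy_mean_offset(gaze_deg, target_deg):
    """Mean angular offset between gaze samples and the fixation target."""
    offsets = np.linalg.norm(gaze_deg - target_deg, axis=1)
    return float(np.mean(offsets))

gaze = np.array([[0.10, -0.20], [0.15, -0.10], [0.05, -0.15]])  # (x, y) in degrees
target = np.array([0.0, 0.0])
print(precision_rms(gaze), accuracy_mean_offset(gaze, target))
```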
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While the use of desktop systems to support the designer is feasible, it is challenging for a non-domain expert to get the right impression of the appearance of surfaces due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important, as advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts: the complex interaction of light sources and the camera position must be controlled using software controls, so only small parts of the data set can be properly inspected at a time, and real-world lighting is not considered. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects such as real-world reflections and live image-based lighting, and for interacting with the design using virtual light sources through natural interaction techniques, enabling a more accurate inspection even for non-domain experts.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well suited to creating good levels of detail for high-frequency models, where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide higher visual fidelity. In addition, voxel representations often oversample the geometric domain, especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
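A minimal sketch of this idea is shown below: a single octree whose leaves hold either a coarse voxel proxy or a list of triangles, with the voxel used when the node is small enough for the desired level of detail. All names, the data layout, and the selection heuristic are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a hybrid voxel/triangle octree leaf selection for ray tracing.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class OctreeNode:
    center: Tuple[float, float, float]
    half_size: float
    children: List["OctreeNode"] = field(default_factory=list)
    triangles: List[object] = field(default_factory=list)      # full-detail leaf
    voxel_color: Optional[Tuple[float, float, float]] = None   # coarse voxel proxy

def intersect_triangles(ray, triangles):
    # Placeholder for an exact ray-triangle intersection test.
    return None

def shade(node, ray, lod_extent):
    """Use the voxel proxy when the node is smaller than the requested level
    of detail; otherwise descend and test triangles exactly."""
    if node.voxel_color is not None and node.half_size < lod_extent:
        return node.voxel_color
    if node.triangles:
        return intersect_triangles(ray, node.triangles)
    for child in node.children:
        hit = shade(child, ray, lod_extent)
        if hit is not None:
            return hit
    return None
```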
This article reports on whether the believability of avatars is a possible modulation criterion for virtual exposure therapy for agoraphobia. To this end, several avatar believability levels that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. In a study, the work demonstrates a significant influence of the believability levels on presence, copresence, and realism.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments such as large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that make it easy to get started. Existing frameworks and libraries that support multi-display rendering are often complex to understand, configure, and extend. This is especially critical in an educational context, where the time students have for their projects is limited and quite short. These tools are also largely known and used only within research communities, and thus provide little benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows, with a small overhead, for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in distributed and multi-display rendering, such as frame, camera, and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors, and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations, so learning it provides students with a qualification in high demand.
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of the European and American populations suffer from back pain at some point in their lives. One successful approach to addressing lower back pain is postural training, which can be supported by wearable devices that provide real-time feedback about the user’s posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent groups of postural changes that are meaningful to professional posture trainers.
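A conceptual sketch of such a before/after difference feature space with unsupervised clustering is given below. The feature definition, sensor count, and data are hypothetical and do not reproduce the SpineTracker's actual layout or the study's analysis.

```python
# Conceptual sketch: cluster participants by their posture change vectors.
import numpy as np
from sklearn.cluster import KMeans

POSITIONS = ("stand", "sit", "bend")

def posture_change_features(before, after):
    """One feature vector per participant: per-sensor angle changes,
    concatenated over the standing, sitting, and bending positions."""
    return np.concatenate([after[p] - before[p] for p in POSITIONS])

# Hypothetical data: 5 sensor angles per position, for 20 participants.
rng = np.random.default_rng(0)
X = np.stack([
    posture_change_features(
        {p: rng.normal(size=5) for p in POSITIONS},
        {p: rng.normal(size=5) for p in POSITIONS},
    )
    for _ in range(20)
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```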
Application performance improvements through VM parameter modification after runtime analysis
(2013)
In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and we show how proximity cues in combination with collision and friction cues can significantly improve performance.
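For illustration only, one simple way such a proximity cue could be driven is a distance-to-intensity falloff; the function below is a sketch and not the paper's mapping.

```python
# Hypothetical mapping from hand-object distance to cue intensity
# (e.g., vibration amplitude or audio gain) with a linear falloff.
def proximity_cue_intensity(distance_m: float, activation_radius_m: float = 0.15) -> float:
    """Return a 0..1 cue intensity: full at contact, zero outside the radius."""
    if distance_m >= activation_radius_m:
        return 0.0
    return 1.0 - distance_m / activation_radius_m
```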
This article describes an approach to rapidly prototype the parameters of a Java application run on the IBM J9 Virtual Machine in order to improve its performance. It works by analyzing VM output and searching for behavioral patterns. These patterns are matched against a list of known patterns for which rules exist that specify how to adapt the VM to a given application. Adapting the application is done by adding parameters and changing existing ones. The process is fully automated and carried out by a toolkit. The toolkit iteratively cycles through multiple possible parameter sets, benchmarks them, and proposes the best alternative to the user. The user can, without any prior knowledge about the Java application or the VM, improve the performance of the deployed application and quickly cycle through a multitude of different settings to benchmark them. When tested with representative benchmarks, improvements of up to 150% were achieved.
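The sketch below captures only the iterative benchmark-and-compare idea: run the application under several candidate VM parameter sets and report the fastest. The flag sets shown are generic examples, not the toolkit's actual rule base or its pattern analysis.

```python
# Sketch: time a Java application under candidate VM flag sets.
import subprocess
import time

CANDIDATE_FLAGS = [
    ["-Xmx512m"],
    ["-Xmx1g", "-Xgcpolicy:gencon"],  # J9-style GC policy flag
    ["-Xmx1g", "-Xmn256m"],
]

def benchmark(jar_path: str, flags, runs: int = 3) -> float:
    """Average wall-clock time of the application under the given VM flags."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["java", *flags, "-jar", jar_path], check=True)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def best_flags(jar_path: str):
    results = {tuple(f): benchmark(jar_path, f) for f in CANDIDATE_FLAGS}
    return min(results, key=results.get)
```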
Most VE frameworks try to support many different input and output devices. They do not concentrate as much on rendering, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers. This includes sound renderers, physics renderers, and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and MacOS X.
We propose a high-performance GPU implementation of Ray Histogram Fusion (RHF), a denoising method for stochastic global illumination rendering. Based on the CPU implementation of the original algorithm, we present a naive GPU implementation and the necessary optimization steps. Eventually, we show that our optimizations increase the performance of RHF by two orders of magnitude compared to the original CPU implementation and by one order of magnitude compared to the naive GPU implementation. We show how the quality for identical rendering times relates to unfiltered path tracing and how much time is needed to achieve identical quality compared to an unfiltered path-traced result. Finally, we summarize our work and describe possible future applications and research based on it.
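As a rough sketch of the kind of comparison at the core of RHF, the code below computes a chi-square-like distance between per-pixel sample histograms and uses it to decide whether two pixels are similar enough to be averaged. Normalization details and the neighborhood search of the published algorithm are omitted, and the threshold is an assumption.

```python
# Sketch of a chi-square histogram distance as used conceptually in RHF.
import numpy as np

def chi2_distance(hist_a: np.ndarray, hist_b: np.ndarray, eps: float = 1e-8) -> float:
    """Symmetric chi-square distance between two sample histograms."""
    num = (hist_a - hist_b) ** 2
    den = hist_a + hist_b + eps
    return float(np.sum(num / den))

def similar(hist_a: np.ndarray, hist_b: np.ndarray, threshold: float = 1.0) -> bool:
    """Pixels whose histograms are close enough may share their samples."""
    return chi2_distance(hist_a, hist_b) < threshold
```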
Ray tracing, accurate physical simulations with collision detection, particle systems, and spatial audio rendering are only a few components that are becoming more and more interesting for virtual environments due to steadily increasing computing power. Many components use geometric queries for their calculations. To speed up those queries, spatial data structures are used. These data structures are mostly implemented individually for every problem, resulting in many individually maintained parts, unnecessary memory consumption, and wasted computing power to maintain all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
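At an interface level, the shared-structure idea might look like the sketch below: one spatial index instance that ray tracing, collision detection, and audio components all query, instead of each maintaining its own structure. The method names and signatures are assumptions for illustration.

```python
# Sketch of a centralized spatial index interface shared by several subsystems.
from typing import Iterable, Protocol, Tuple

Vec3 = Tuple[float, float, float]

class SpatialIndex(Protocol):
    def insert(self, object_id: int, aabb_min: Vec3, aabb_max: Vec3) -> None: ...
    def remove(self, object_id: int) -> None: ...
    def query_ray(self, origin: Vec3, direction: Vec3) -> Iterable[int]: ...
    def query_range(self, center: Vec3, radius: float) -> Iterable[int]: ...

# Ray tracing, physics, and spatial audio would all receive the same
# SpatialIndex instance and issue their geometric queries against it.
```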