H-BRS Bibliography
Departments, institutes and facilities: Institute of Visual Computing (IVC) (210)
Document Type: Conference Object (210)
Keywords
- FPGA (10)
- Virtual Reality (6)
- 3D user interface (4)
- Education (4)
- Hyperspectral image (3)
- Image Processing (3)
- virtual reality (3)
- Algorithms (2)
- Augmented reality (2)
- CUDA (2)
- Computer Graphics (2)
- Content Module (2)
- Distributed rendering (2)
- Eye Tracking (2)
- Intelligent virtual agents (2)
- Large, high-resolution displays (2)
- Original Story (2)
- Raman microscopy (2)
- Remote laboratory (2)
- Serious Games (2)
- digital design (2)
- education (2)
- edutainment (2)
- haptics (2)
- human factors (2)
- hypermedia (2)
- image fusion (2)
- interface design (2)
- low power (2)
- machine vision (2)
- microcontroller (2)
- navigation (2)
- pansharpening (2)
- remote lab (2)
- serious games (2)
- virtual environments (2)
- 2D Level Design (1)
- 3D User Interface (1)
- 3D visualization (1)
- 3D gaming (1)
- 3D interfaces (1)
- 3D shape (1)
- 3D user interfaces (1)
- Active locomotion (1)
- Augmented Reality (1)
- BLOB Detection (1)
- Bicycle Simulator (1)
- Blob Detection (1)
- Bound Volume Hierarchy (1)
- Bounding Box (1)
- Cell Processor (1)
- Cell/B.E. (1)
- Center-of-Mass (1)
- Clusters (1)
- Co-located Collaboration (1)
- Co-located work (1)
- Computer graphics (1)
- Computer-supported Cooperative Work (1)
- Container Structure (1)
- Containerization (1)
- Correlative Microscopy (1)
- Current measurement (1)
- Datalog (1)
- Digital Storytelling (1)
- Docker (1)
- EEG (1)
- Real-time (1)
- Educational institutions (1)
- Evaluation board (1)
- Fixed spatial data (1)
- Force field (1)
- Fuzzy logic (1)
- Game Engine (1)
- Games and Simulations for Learning (1)
- Garbage collection (1)
- Gaze Behavior (1)
- Gaze Depth Estimation (1)
- Gender Issues in Computer Science Education (1)
- Global Illumination (1)
- Grailog (1)
- Group Behavior (1)
- Group behavior (1)
- Groupware (1)
- HCI (1)
- Hand Guidance (1)
- Hand Tracking (1)
- Head-mounted Display (1)
- Heat shrink tubing (1)
- High-performance computing (1)
- Higher education (1)
- Hochschule Bonn-Rhein-Sieg (1)
- Human Factors (1)
- Image-based rendering (1)
- Immersive Visualization Environment (1)
- Information interaction (1)
- Interaction (1)
- Internet (1)
- Interoperability (1)
- Inventory (1)
- Java virtual machine (1)
- JavaScript (1)
- Laboratories (1)
- Lighting simulation (1)
- Low-power design (1)
- Low-power digital design (1)
- Low-power education (1)
- Main Memory (1)
- Management (1)
- Measurement (1)
- Modular software packages (1)
- Molecular modeling (1)
- Motion Capture (1)
- Multilayer interaction (1)
- Multimodal Microspectroscopy (1)
- Multiuser (1)
- Musical Performance (1)
- NVIDIA Tesla (1)
- Narration Module (1)
- Navigation (1)
- Navigation interface (1)
- Numerical optimization (1)
- OER (1)
- Parallel Processing (1)
- Parallelization (1)
- Perception (1)
- Performance (1)
- Physical exercising game platform (1)
- Pointing (1)
- Pointing devices (1)
- Pose Estimation (1)
- Power dissipation (1)
- Power measurement (1)
- Pro-MINT-us (1)
- Programming (1)
- Qualitätspakt Lehre (1)
- Radix Sort (1)
- Ray Casting (1)
- Ray Tracing (1)
- Ray tracing (1)
- Reasoning (1)
- Registration Refinement (1)
- Remote lab (1)
- Render Cache (1)
- Reversible Logic Synthesis (1)
- RuleML (1)
- S3D Video (1)
- S3D video (1)
- SVG (1)
- Scalable Vector Graphic (1)
- Second Life (1)
- Social Virtual Reality (1)
- Split Axis (1)
- Standards (1)
- Star Trek (1)
- Stereoscopic Rendering (1)
- Stereoscopic rendering (1)
- Story Element (1)
- Survey (1)
- Swim Stroke Analysis (1)
- Switches (1)
- Synthetic perception (1)
- SystemVerilog (1)
- Tactile Feedback (1)
- Tactile feedback (1)
- Three-dimensional displays (1)
- Tiled displays (1)
- Tiled-display walls (1)
- Touchscreen interaction (1)
- Traffic Simulations (1)
- UI design (1)
- Unity (1)
- Usability (1)
- User Roles (1)
- User Study (1)
- User-Centered Approach (1)
- VR (1)
- VR system design (1)
- Verilog (1)
- Virtual Agents (1)
- Virtual Environments (1)
- Virtual attention (1)
- Virtual environments (1)
- Visualization (1)
- Volume rendering (1)
- XML (1)
- XSLT (1)
- adaptive agents (1)
- adaptive filters (1)
- affective computing (1)
- analysis (1)
- audio-tactile feedback (1)
- augmented, and virtual realities (1)
- authoring tools (1)
- automation (1)
- bass-shaker (1)
- bicycle (1)
- body-centric cues (1)
- brain computer interfaces (1)
- brightfield microscopy (1)
- bus load (1)
- can bus (1)
- chemical sensors (1)
- co-located collaboration (1)
- collaboration (1)
- colorimetry (1)
- compensation (1)
- component analyses (1)
- computational logic (1)
- computer-supported collaborative work (1)
- cooperative path planning (1)
- data logging (1)
- depth perception (1)
- digital storytelling (1)
- directed hypergraphs (1)
- disabled people (1)
- e-learning (1)
- educational methods (1)
- electrical devices (1)
- electrical engineering education (1)
- electronics (1)
- embodied interfaces (1)
- embroidery machine (1)
- emotion computing (1)
- energy awareness (1)
- engineering education (1)
- evaluation board development (1)
- eye-tracking (1)
- field programmable gate arrays (1)
- foveated rendering (1)
- fpga (1)
- full-body interface (1)
- fuzzy logic (1)
- game engine (1)
- gaming (1)
- graphs (1)
- guidance (1)
- hand guidance (1)
- hands-on lab (1)
- hands-on labs (1)
- haptic feedback (1)
- heat shrink tubes (1)
- human-centric lighting (1)
- image processing (1)
- immersion (1)
- immersive systems (1)
- information display methods (1)
- interaction (1)
- interaction techniques (1)
- leaning (1)
- leaning, self-motion perception (1)
- life-long learning (1)
- linguistic variable (1)
- linguistic variables (1)
- low-power design (1)
- measurement (1)
- medical training (1)
- mesoscopic agents (1)
- microcontroller education (1)
- microcontrollers (1)
- mobile projection (1)
- momentary frequency (1)
- monitoring (1)
- motion cueing (1)
- motion platform (1)
- multi-layer display (1)
- multi-user VR (1)
- multiresolution analysis (1)
- multisensory cues (1)
- multisensory interface (1)
- natural user interface (1)
- neural networks (1)
- pen interaction (1)
- performance optimizations (1)
- peripheral vision (1)
- peripheral visual field (1)
- photometry (1)
- physical model immersive (1)
- programming (1)
- project management (1)
- proxemics (1)
- remote-lab (1)
- robotics (1)
- rules (1)
- scene element representation (1)
- security (1)
- see-through display (1)
- see-through head-mounted displays (1)
- semantic image segmentation (1)
- short-term memory (1)
- simulator (1)
- software engineering (1)
- spectral rendering (1)
- stereoscopic vision (1)
- story authoring (1)
- submillimeter precision (1)
- surface textures (1)
- surface topography (1)
- technological literacy (1)
- telepresence (1)
- territoriality (1)
- tiled displays (1)
- time series processing (1)
- unmanned aerial vehicle (1)
- unmanned ground vehicle (1)
- user acceptance (1)
- user engagement (1)
- user study (1)
- vection (1)
- vibration (1)
- video lectures (1)
- virtual environment framework (1)
- virtual locomotion (1)
- visual quality control (1)
- visualisation (1)
- visuohaptic feedback (1)
- whole-body interface (1)
- workspace awareness (1)
- zooming interface (1)
We describe a systematic approach for rendering time-varying simulation data produced by exascale simulations on GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. In particular, we focus on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that comprises 423 simulation time steps, each storing 2.5 GB of data per scalar field, for a total of 4 TB. We present strategies for rendering this time series with smooth animation and at interactive rates on current-generation GPUs. We start with an unoptimized baseline and extend it step by step to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exascale time series data sets.
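The core of such a streaming update scheme is to hide disk I/O behind rendering. As a minimal illustration only (not the authors' implementation), the idea can be sketched with a double-buffered prefetch loop; `load_time_step` and its tiny placeholder payload are hypothetical stand-ins for reading one 2.5 GB scalar field:

```python
import threading
from queue import Queue

def load_time_step(t):
    # Hypothetical stand-in for reading one scalar field of time step t;
    # the real data set stores 2.5 GB per field and time step.
    return [float(t)] * 4

def stream_render(num_steps, render):
    # Double buffering: a loader thread prefetches the next time step
    # while the consumer renders the current one, hiding I/O latency.
    buffers = Queue(maxsize=2)

    def loader():
        for t in range(num_steps):
            buffers.put(load_time_step(t))
        buffers.put(None)  # sentinel: end of the time series

    threading.Thread(target=loader, daemon=True).start()
    while (data := buffers.get()) is not None:
        render(data)
```

The bounded queue is the design point: with `maxsize=2` at most one future step is held in memory, which is what keeps a 4 TB series renderable on a single workstation.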
This article reports on whether the credibility of avatars is a suitable modulation criterion for virtual exposure therapy for agoraphobia. To this end, several avatar credibility levels that could hypothetically influence virtual exposure therapy for agoraphobia are developed, along with a potential exposure scenario. Within a user study, the work demonstrates a significant effect of the credibility levels on presence, co-presence, and realism.
Since 2012, the introductory study phase at Hochschule Bonn-Rhein-Sieg has been funded within the Qualitätspakt Lehre. A central concern of the "Pro-MINT-us" project is to involve the entire university, so that the teaching ideas developed in the project are anchored sustainably rather than offered as isolated measures.
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (an FPGA) synchronises and combines several graphics inputs. The application area is computer graphics, in particular the rendering of large 3D models, which is a compute-intensive task. Complex scenes are therefore generated on parallel systems and merged to produce the requested output image. So far, the transport of intermediate results has often been handled by a local area network. As this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals in an FPGA.
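Merging partial images from parallel renderers is classic sort-last compositing: per pixel, the fragment closest to the camera wins. The abstract does not specify the compositing rule, so the following is a hedged sketch of the standard depth test, with lists of (color, depth) pairs standing in for the FPGA's pixel streams:

```python
def depth_composite(a, b):
    # Sort-last compositing: for each pixel keep the fragment with the
    # smaller depth value, i.e. the one closer to the camera.
    # a and b are equal-length lists of (color, depth) pairs.
    return [pa if pa[1] <= pb[1] else pb for pa, pb in zip(a, b)]
```

In the FPGA this comparison runs per pixel clock on the synchronised video streams, which is why it avoids the network bottleneck entirely.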
Qualifikation für gute Lehre
(2010)
One of six working groups at the 2009 annual conference of the HRK Bologna Centre addressed the topic "Qualifikation für gute Lehre" (qualification for good teaching). After two keynote talks, the participants discussed how university staff can be qualified for teaching even more thoroughly than before.
This paper introduces FaceHaptics, a novel haptic display based on a robot arm attached to a head-mounted virtual reality display. It provides localized, multi-directional and movable haptic cues in the form of wind, warmth, moving and single-point touch events, and water spray to dedicated parts of the face not covered by the head-mounted display. The easily extensible system can, however, in principle mount any type of compact haptic actuator or object. User study 1 showed that users appreciate the directional resolution of the cues and can judge wind direction well, especially when they move their head and the wind direction is adjusted dynamically to compensate for head rotations. Study 2 showed that adding FaceHaptics cues to a VR walkthrough can significantly improve user experience, presence, and emotional responses.
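The head-rotation compensation mentioned in study 1 amounts to counter-rotating the cue against the head pose so the wind stays fixed in the world. A minimal sketch, assuming yaw-only rotation in degrees (the function name and the 1D simplification are mine, not from the paper):

```python
def compensated_wind_yaw(world_wind_yaw_deg, head_yaw_deg):
    # Counter-rotate the haptic cue against the head yaw so the wind
    # keeps a fixed direction in the world rather than on the face.
    return (world_wind_yaw_deg - head_yaw_deg) % 360.0
```

A full implementation would apply the inverse head orientation as a 3D rotation to the nozzle target, but the principle is this subtraction.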
Rendering techniques for design evaluation and review, or for visualizing large volume data, often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates. A view direction based rendering technique renders the user's central field of view in high quality, whereas the surroundings are rendered with a level-of-detail approach depending on the distance to the user's central field of view, providing an opportunity to increase rendering efficiency. We propose a prototype implementation and evaluation of a focus-based rendering technique built on a hybrid ray tracing/sparse voxel octree rendering approach.
In contrast to projection-based systems, large, high-resolution multi-display systems offer a high pixel density on a large visualization area. This enables users to step up to the displays and see a small but highly detailed area. If they move back a few steps, they no longer perceive details at pixel level but instead get an overview of the whole visualization. Rendering techniques for design evaluation and review, or for visualizing large volume data (e.g. Big Data applications), often use computationally expensive ray-based methods. Due to the number of pixels and the amount of data, these methods often do not achieve interactive frame rates.
A view direction based (VDB) rendering technique renders the user's central field of view in high quality whereas the surrounding is rendered with a level-of-detail approach depending on the distance to the user's central field of view. This approach mimics the physiology of the human eye and conserves the advantage of highly detailed information when standing close to the multi-display system as well as the general overview of the whole scene. In this paper we propose a prototype implementation and evaluation of a focus-based rendering technique based on a hybrid ray tracing/sparse voxel octree rendering approach.
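The level-of-detail selection described above can be reduced to one decision per pixel or tile: how far is its direction from the user's gaze? A hedged sketch, assuming unit direction vectors; the discrete levels and the 15-degree falloff are illustrative choices, not values from the paper:

```python
import math

def lod_level(view_dir, gaze_dir, max_level=4, falloff_deg=15.0):
    # Full quality (level 0) in the central field of view; coarser
    # levels as the angle to the gaze direction grows. Assumes both
    # inputs are unit-length 3D vectors.
    dot = sum(v * g for v, g in zip(view_dir, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return min(max_level, int(angle // falloff_deg))
```

In the hybrid renderer this level would steer, for example, the sparse voxel octree depth at which a ray terminates, so peripheral pixels cost far less than foveal ones.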
This work presents a method for generating and rendering natural-looking vegetation over very large areas while taking ecological factors into account. Because of the complexity of biological systems and the level of detail of plant models, generating and visualizing vegetation is a challenging area of computer graphics, and it can considerably increase the realism of landscape visualizations. Building on [DMS06], Silva generates vegetation such that the Wang tiles required for rendering and the partial distributions associated with them can be reused. To this end, a method is presented for generating Poisson disk distributions with variable radii on seamless Wang tile sets without computationally expensive global optimization. By incorporating neighborhoods and freely configurable generation pipelines, arbitrary abiotic and biotic factors can be taken into account during vegetation generation. The plant distributions that Silva produces on Wang tiles allow the accelerating data structures built on top of them to be reused during visualization. Multi-level instancing and nested kd-trees make it possible to visualize large vegetated areas of hundreds of square kilometers with low render times and a small memory footprint.
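For readers unfamiliar with Poisson disk distributions: every pair of accepted points must keep a minimum distance. The simplest (and slow) construction is dart throwing, sketched below with a single fixed radius; Silva's contribution is precisely to avoid this brute-force global test by building the distributions per Wang tile, with variable radii, so this is background, not the paper's algorithm:

```python
import math
import random

def poisson_disk(width, height, radius, tries=2000, seed=1):
    # Naive dart throwing: accept a candidate point only if it keeps at
    # least `radius` distance to every point accepted so far.
    rng = random.Random(seed)
    points = []
    for _ in range(tries):
        p = (rng.uniform(0.0, width), rng.uniform(0.0, height))
        if all(math.dist(p, q) >= radius for q in points):
            points.append(p)
    return points
```

The quadratic distance test is what makes the naive approach impractical for hundreds of square kilometers, and why precomputing conflict-free distributions on a small seamless tile set pays off.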