H-BRS Bibliography
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (313)
Document Type
- Conference Object (210)
- Article (65)
- Report (14)
- Part of a Book (6)
- Conference Proceedings (5)
- Book (monograph, edited volume) (4)
- Doctoral Thesis (4)
- Part of Periodical (2)
- Contribution to a Periodical (1)
- Research Data (1)
Selection Performance and Reliability of Eye and Head Gaze Tracking Under Varying Light Conditions (2024)
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer’s perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target’s previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts’ performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge, and the literature on view selection lacks consensus about which objective metrics should be considered. Existing strategies, whether information-theoretic, grounded in instructional design, or aesthetics-motivated, each fail to incorporate the others' approaches. In this work, we postulate a strategy that combines information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for example in 3D rendering. We adapted one such measure, viewpoint entropy, to real-world 2D images, and additionally incorporated similarity penalization to obtain a more accurate measure of a view's entropy, one of our metrics for best-view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case; a limitation of these scenarios is that they do not include collaborative training and feature only a single trainer. To incorporate instructional-design considerations, we included the trainer's body pose, face visibility, face visibility while instructing, and hand visibility as metrics; to incorporate domain knowledge, we included the visibility of predetermined regions as another metric. Together these metrics produce a parameterized view-recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate the metrics, and the study responses were then used to optimize the view-recommendation performance, reaching a normalized discounted cumulative gain (NDCG) of 0.912, which indicates good agreement with user choices.
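The abstract above reports view-recommendation quality as a normalized discounted cumulative gain (NDCG) of 0.912. As a reference for the metric itself (not the paper's evaluation code), a minimal NDCG computation looks like this; the relevance scores in the usage line are hypothetical:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: relevance at rank r is discounted by
    # log2(r + 1), with ranks starting at 1 (hence i + 2 for a 0-based index).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending) ordering, yielding [0, 1].
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Hypothetical user relevance scores for one recommended camera-view ranking:
score = ndcg([3, 2, 3, 0, 1])  # ~0.972: close to the ideal ordering
```

A perfectly ordered ranking yields an NDCG of exactly 1.0, so values above 0.9, as reported in the study, indicate rankings that closely match user preferences.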
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the thesis proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. The thesis also discusses the limitations of the proposed methods and provides recommendations for future research.
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
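The multisensory integration described above is commonly modeled in the perceptual-upright literature as a weighted vector sum of the directions indicated by vision, gravity, and the body axis. The following sketch illustrates that model only; the cue directions and weights below are illustrative assumptions, not values from this study:

```python
import math

def unit(deg):
    # 2D unit vector at `deg` degrees away from the head's up-axis.
    r = math.radians(deg)
    return (math.sin(r), math.cos(r))

def perceptual_upright(cues, weights):
    # Weighted vector sum of cue directions; returns the resulting tilt of
    # the perceptual upright in degrees from the head axis.
    x = sum(w * c[0] for c, w in zip(cues, weights))
    y = sum(w * c[1] for c, w in zip(cues, weights))
    return math.degrees(math.atan2(x, y))

# Illustrative only: a visual scene tilted 30 degrees while gravity and the
# body axis stay upright, with made-up weights (real weights are fitted per
# observer from OChaRT responses).
tilt = perceptual_upright([unit(30.0), unit(0.0), unit(0.0)], [0.25, 0.5, 0.25])
```

In this framing, the "visual effect" measured in the study corresponds to the vision weight: the neutral-buoyancy result is a smaller visual contribution to the summed vector, with the gravity term unchanged.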
Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removing both somatosensory and vestibular cues to the direction of gravity by floating in microgravity, or placing them in conflict using virtual reality, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.
Interest in virtual reality (VR) for higher education is currently growing, driven by the possibility of representing logistically difficult tasks and by positive results from effectiveness studies. At the same time, there is a lack of studies that compare immersive VR environments, non-immersive desktop environments, and conventional learning materials, and that evaluate teaching-methodology aspects. This paper therefore addresses the design and implementation of a learning environment for higher education that can be used both with a head-mounted display (HMD) and on a desktop, as well as its evaluation using an experimental group design. The learning environment was built on a software platform developed in-house, and its effectiveness was evaluated and compared using two experimental groups (VR vs. desktop environment) and a control group. In a pilot study, both qualitative and quantitative assessments of the learning environment's usability were positive in both experimental groups. Moreover, positive effects on the cognitive and affective impact of the learning environment emerged compared to conventional learning materials. However, differences between use as a VR or desktop environment were barely detectable on the cognitive and affective level. Analysis of log data, though, points to differences in learning and exploration behaviour.
The visual and auditory quality of computer-mediated stimuli for virtual and extended reality (VR/XR) is rapidly improving. Still, it remains challenging to provide a fully embodied sensation and awareness of objects surrounding, approaching, or touching us in a 3D environment, even though such feedback can greatly aid task performance in a 3D user interface. For example, feedback can provide warning signals for potential collisions (e.g., bumping into an obstacle while navigating) or pinpoint areas to which one’s attention should be directed (e.g., points of interest or danger). These events inform our motor behaviour and are linked to perception mechanisms associated with our so-called peripersonal and extrapersonal space models, which relate our body to object distance, direction, and contact point/impact. We discuss these reference spaces to explain the role of different cues in the motor action responses that underlie 3D interaction tasks. Providing proximity and collision cues, however, can be challenging. Various full-body vibration systems have been developed that stimulate body parts other than the hands, but they can have limitations in their applicability and feasibility due to cost, the effort to operate them, and hygienic considerations associated with, e.g., Covid-19. Informed by the results of a prior study using low frequencies for collision feedback, in this paper we look at an unobtrusive way to provide spatial, proximal, and collision cues. Specifically, we assess the potential of foot-sole stimulation to provide cues about object direction and relative distance, as well as collision direction and force of impact. Results indicate that vibration-based stimuli in particular could be useful within the frame of peripersonal and extrapersonal space perception that supports 3DUI tasks. Current results favor the combination of continuous vibrotactor cues for proximity and bass-shaker cues for body collision. Results show that users could rather easily judge the different cues at a reasonably high granularity, which may be sufficient to support common navigation tasks in a 3DUI.
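One simple way to realize a continuous vibrotactor proximity cue like the one described above is to map object distance within the peripersonal range to vibration amplitude. The linear falloff and the parameter values below are illustrative assumptions, not the mapping used in the study:

```python
def proximity_intensity(distance_m, max_range_m=1.5, min_intensity=0.1):
    """Map object distance within a peripersonal range to a vibrotactor
    amplitude in [0, 1]: closer objects vibrate more strongly, and objects
    beyond the range produce no vibration. The linear falloff and parameter
    values are illustrative, not the scheme from the study."""
    if distance_m >= max_range_m:
        return 0.0
    closeness = 1.0 - distance_m / max_range_m
    return min_intensity + (1.0 - min_intensity) * closeness
```

The floor given by `min_intensity` keeps any in-range object perceivable; a perceptually calibrated (e.g., stepped or logarithmic) mapping could substitute for the linear ramp, which relates directly to the granularity question raised in the results.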
We describe a systematic approach for rendering time-varying simulation data produced by exa-scale simulations, using GPU workstations. The data sets we focus on use adaptive mesh refinement (AMR) to overcome memory bandwidth limitations by representing interesting regions in space with high detail. Particularly, our focus is on data sets where the AMR hierarchy is fixed and does not change over time. Our study is motivated by the NASA Exajet, a large computational fluid dynamics simulation of a civilian cargo aircraft that consists of 423 simulation time steps, each storing 2.5 GB of data per scalar field, amounting to a total of 4 TB. We present strategies for rendering this time series data set with smooth animation and at interactive rates using current generation GPUs. We start with an unoptimized baseline and step by step extend that to support fast streaming updates. Our approach demonstrates how to push current visualization workstations and modern visualization APIs to their limits to achieve interactive visualization of exa-scale time series data sets.
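The streaming strategy described above hinges on overlapping rendering of the current time step with loading of the next one. Below is a minimal sketch of that double-buffered overlap, with placeholder `load_step` and `draw_step` callables standing in for the actual disk I/O and GPU work (an illustration of the pattern, not the paper's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def render_time_series(num_steps, load_step, draw_step):
    # Double-buffered streaming: while step t is being drawn, step t + 1 is
    # loaded on a worker thread, hiding I/O latency behind rendering.
    frames = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(load_step, 0)          # prefetch the first step
        for t in range(num_steps):
            data = pending.result()                  # wait for step t's data
            if t + 1 < num_steps:
                pending = pool.submit(load_step, t + 1)
            frames.append(draw_step(t, data))        # "render" while next loads
    return frames

# Placeholder callables standing in for field-data reads and GPU draw calls:
frames = render_time_series(4, lambda t: t * 10, lambda t, data: (t, data))
```

With a fixed AMR hierarchy, only the scalar field payload changes per step, so each prefetched upload can reuse the same GPU-side index structures.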
The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per single frame. These limitations have recently been overcome individually, but addressing them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real-time with plenty of room for future optimizations and extensions.
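The volume rendering underlying NeRF-style techniques accumulates color along each camera ray with front-to-back emission-absorption compositing. A minimal single-channel sketch of that quadrature (the textbook formulation, not KiloNeuS's renderer):

```python
import math

def integrate_ray(samples, step):
    # Front-to-back emission-absorption compositing: each sample along the
    # ray is a (color, density) pair spaced `step` apart; transmittance
    # tracks how much light still passes through the medium so far.
    color, transmittance = 0.0, 1.0
    for c, sigma in samples:
        alpha = 1.0 - math.exp(-sigma * step)   # opacity of this segment
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color, transmittance
```

The "millions of queries per frame" cost follows directly: every `(c, sigma)` sample is one MLP evaluation, and each pixel's ray needs many samples, which is what interactive neural representations like KiloNeuS work to reduce.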
Modern GPUs come with dedicated hardware to perform ray/triangle intersections and bounding volume hierarchy (BVH) traversal. While the primary use case for this hardware is photorealistic 3D computer graphics, with careful algorithm design scientists can also use this special-purpose hardware to accelerate general-purpose computations such as point containment queries. This article explains the principles behind these techniques and their application to vector field visualization of large simulation data using particle tracing.
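The containment queries mentioned above amount to traversing a bounding volume hierarchy and collecting primitives whose bounds contain the query point, which the ray-tracing hardware can execute as, e.g., a zero-length ray. A CPU sketch of the traversal logic with a toy two-leaf hierarchy (an illustration of the principle, not the article's GPU code):

```python
class BVHNode:
    def __init__(self, lo, hi, left=None, right=None, prim=None):
        self.lo, self.hi = lo, hi            # axis-aligned bounding box corners
        self.left, self.right = left, right  # children (None at a leaf)
        self.prim = prim                     # primitive stored at a leaf

def point_query(node, p, hits):
    # Collect primitives whose bounds contain point p; subtrees whose boxes
    # miss the point are culled, mirroring what the traversal hardware does.
    if node is None or any(not (l <= c <= h) for l, c, h in zip(node.lo, p, node.hi)):
        return
    if node.prim is not None:
        hits.append(node.prim)  # a real query would run an exact test here
        return
    point_query(node.left, p, hits)
    point_query(node.right, p, hits)

# Toy hierarchy: two cells under one root box, e.g. mesh elements of a
# simulation grid being searched for the cell containing a tracer particle.
root = BVHNode((0, 0, 0), (3, 1, 1),
               left=BVHNode((0, 0, 0), (1, 1, 1), prim="A"),
               right=BVHNode((2, 0, 0), (3, 1, 1), prim="B"))
hits = []
point_query(root, (0.5, 0.5, 0.5), hits)  # lands in cell "A"
```

For particle tracing, this query runs once per particle per integration step, which is why offloading the box tests and traversal to dedicated hardware pays off at scale.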