H-BRS Bibliography: Institute of Visual Computing (IVC), 313 publications
When users in virtual reality cannot physically walk and self-motions are instead only visually simulated, spatial updating is often impaired. In this paper, we report on a study that investigated whether HeadJoystick, an embodied leaning-based flying interface, could improve performance in a 3D navigational search task that relies on maintaining situational awareness and spatial updating in VR. We compared it to Gamepad, a standard flying interface. For both interfaces, participants were seated on a swivel chair and controlled simulated rotations by physically rotating. They either leaned (forward/backward, right/left, up/down) or used the Gamepad thumbsticks for simulated translation. In a gamified 3D navigational search task, participants had to find eight balls within 5 min. Those balls were hidden amongst 16 randomly positioned boxes in a dark environment devoid of any landmarks. Compared to the Gamepad, participants collected more balls using the HeadJoystick. It also minimized the distance travelled, motion sickness, and mental task demand. Moreover, the HeadJoystick was rated better in terms of ease of use, controllability, learnability, overall usability, and self-motion perception. However, participants noted that the HeadJoystick could be more physically fatiguing after prolonged use. Overall, participants felt more engaged with the HeadJoystick, enjoyed it more, and preferred it. Together, this provides evidence that leaning-based interfaces like HeadJoystick can provide an affordable and effective alternative for flying in VR and potentially telepresence drones.
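A leaning-based control of this kind boils down to mapping the seated user's torso/head offset from a calibrated rest pose to a simulated translation velocity. The sketch below is a minimal illustration of that idea; the function name, deadzone, quadratic gain, and speed cap are assumptions for the example, not the paper's actual transfer function.

```python
def leaning_velocity(offset_m, deadzone=0.02, gain=2.0, max_speed=5.0):
    """Map a per-axis offset from the calibrated rest pose (metres) to a
    simulated translation speed (m/s). Axes: forward/back, right/left, up/down.
    Hypothetical parameters: small sway inside the deadzone is ignored, and a
    quadratic gain gives fine control near rest."""
    speeds = []
    for o in offset_m:
        mag = abs(o)
        if mag < deadzone:
            speeds.append(0.0)  # postural sway: no motion
            continue
        s = gain * (mag - deadzone) ** 2  # quadratic ramp beyond the deadzone
        speeds.append(min(s, max_speed) * (1.0 if o > 0 else -1.0))
    return speeds
```

Leaning 10 cm forward with these illustrative constants yields a gentle 0.0128 m/s, while small involuntary sway produces no motion at all, which is the usual motivation for a deadzone in embodied interfaces.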
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
In this contribution, a machine vision inspection system is presented which is designed as a length-measuring sensor. It was developed to be applied to a range of heat shrink tubes varying in length, diameter, and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it had to operate in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed, which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for a contact-free and fully automatic control of 100% of produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length, and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of > 99.7%.
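The multi-measurement strategy, fusing several per-frame sub-pixel length estimates of the same tube segment, can be sketched as follows. The median fusion, the `mm_per_px` scale factor, and the function names are illustrative assumptions; only the 0.1 mm tolerance figure comes from the abstract.

```python
import statistics

def fuse_length_measurements(samples_px, mm_per_px):
    """Fuse repeated sub-pixel length estimates (pixels) of one tube segment
    taken while it crosses the visual field. The median suppresses single-frame
    outliers; the spread gives a rough per-segment repeatability figure."""
    length_mm = statistics.median(samples_px) * mm_per_px
    spread_mm = (max(samples_px) - min(samples_px)) * mm_per_px
    return length_mm, spread_mm

def within_tolerance(length_mm, nominal_mm, tol_mm=0.1):
    """Accept/reject a cut tube segment against its nominal length."""
    return abs(length_mm - nominal_mm) <= tol_mm
```

With four hypothetical frame estimates, one of them a gross outlier, the median keeps the fused length stable while the spread flags the bad frame.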
Telepresence robots allow users to be spatially and socially present in remote environments. Yet, it can be challenging to remotely operate telepresence robots, especially in dense environments such as academic conferences or workplaces. In this paper, we primarily focus on the effect that a speed control method, which automatically slows the telepresence robot down when it gets closer to obstacles, has on user behaviors. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study, we designed a more naturalistic, conference-like experimental environment with tasks that require social interaction, and collected subjective responses from the participants when they were asked to navigate through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface for telepresence robots in static dense environments, but it should remain optional, especially in situations involving social interactions.
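A proximity-based speed limiter of the kind evaluated in the first study can be sketched as a scaling of the operator's requested speed by the distance to the nearest obstacle. The thresholds and the linear ramp below are hypothetical values for illustration, not the authors' controller.

```python
def limit_speed(requested, d_nearest, d_stop=0.3, d_slow=1.5, v_max=1.0):
    """Scale the operator's requested speed (m/s) by the distance to the
    nearest obstacle (m): full speed beyond d_slow, a linear ramp down
    between d_slow and d_stop, and a full stop inside d_stop.
    All thresholds are illustrative assumptions."""
    if d_nearest <= d_stop:
        scale = 0.0
    elif d_nearest >= d_slow:
        scale = 1.0
    else:
        scale = (d_nearest - d_stop) / (d_slow - d_stop)
    # Clamp the request first so the limiter also enforces the platform maximum.
    return max(-v_max, min(v_max, requested)) * scale
```

Because the scaling is continuous, the robot decelerates smoothly as it approaches narrow passages rather than braking abruptly, which plausibly accounts for the "smoother and safer" impression reported by participants.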
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and interoperable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. Using this combination, it is possible to interactively simulate effects from geometric optics, such as correct reflections and refractions.
While humans can effortlessly pick a view from multiple streams, automatically choosing the best view is a challenge. Choosing the best view from multi-camera streams poses the problem of which objective metrics should be considered. Existing work on view selection lacks consensus about which metrics to use: the literature describes diverse possible metrics, and strategies such as information-theoretic, instructional-design-based, or aesthetics-motivated ones each fail to incorporate all approaches. In this work, we postulate a strategy incorporating information-theoretic and instructional-design-based objective metrics to select the best view from a set of views. Traditionally, information-theoretic measures have been used to quantify the goodness of a view, for example in 3D rendering. We adapted a similar measure, known as viewpoint entropy, for real-world 2D images. Additionally, we incorporated similarity penalization to obtain a more accurate measure of the entropy of a view, which is one of the metrics for best view selection. Since the choice of the best view is domain-dependent, we chose demonstration-based training scenarios as our use case. A limitation of our chosen scenarios is that they do not include collaborative training and solely feature a single trainer. To incorporate instructional design considerations, we included the trainer's body pose, face, face while instructing, and hands visibility as metrics. To incorporate domain knowledge, we included the visibility of predetermined regions as another metric. All of these metrics are taken into account to produce a parameterized view recommendation approach for demonstration-based training. An online study using recorded multi-camera video streams from a simulation environment was used to validate these metrics. Furthermore, the responses from the online study were used to optimize the view recommendation performance, reaching a normalized discounted cumulative gain (NDCG) value of 0.912, which indicates good performance with respect to matching user choices.
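Viewpoint entropy treats the relative projected areas of the regions visible in a view as a probability distribution and scores the view by its Shannon entropy; a similarity penalty then demotes near-duplicate views. The sketch below illustrates that idea only; the penalty form and the weight `lam` are assumptions, not the paper's exact formulation.

```python
import math

def viewpoint_entropy(region_areas):
    """Shannon entropy (bits) of the relative projected areas of the regions
    visible in a view; a higher value means a more balanced, informative view."""
    total = sum(region_areas)
    if total == 0:
        return 0.0
    h = 0.0
    for a in region_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h

def penalized_scores(entropies, similarities, lam=0.5):
    """Hypothetical similarity penalization: subtract lam times each view's
    similarity to already-chosen views from its entropy before ranking."""
    return [h - lam * s for h, s in zip(entropies, similarities)]
```

A view showing four regions with equal projected area scores 2 bits, while a view dominated by a single region scores 0, matching the intuition that the former conveys more of the scene.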
Zentrale Archivierung und verteilte Kommunikation digitaler Bilddaten in der Pneumokoniosevorsorge
(2010)
Pneumoconiosis screening examinations require the assessment of a chest X-ray image according to the ILO classification of pneumoconioses. Meanwhile, the required images are already being acquired and communicated digitally on a large scale. This creates new requirements for the technology and workflow mechanisms used, in order to ensure an efficient process of examination, diagnosis, and documentation.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated by experimental approaches. The results and conclusions of that previous work form the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended with a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video image material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is to develop a passive tracking device for an immersive environment to improve user interaction and system usability. Therefore, the detection of the user's position and orientation in relation to the projection surface is required. For a reliable estimation, a robust and fast computation of the BLOBs' center points is necessary. This project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been applied to provide image material for processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera, which is attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and the input device providing the image material.
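Functionally, the BLOB center-point computation these reports describe amounts to connected-component labeling of a binary image followed by centroid averaging. The following software sketch (4-connectivity, iterative flood fill) is only a reference model of what the FPGA pipeline computes, not the hardware design; the function name and the `min_size` filter are assumptions.

```python
def blob_centroids(binary, min_size=1):
    """Label 4-connected foreground pixels in a binary image (list of rows of
    0/1) and return the center point (row, col) of each BLOB, in scan order.
    BLOBs smaller than min_size pixels are discarded as noise."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:  # iterative flood fill of one component
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_size:
                    centroids.append((sum(p[0] for p in pixels) / len(pixels),
                                      sum(p[1] for p in pixels) / len(pixels)))
    return centroids
```

On hardware, the same result is typically obtained in a single streaming pass with run-length merging rather than a flood fill, but the centroids are identical, which is what makes a software model like this useful for the ground-truth validation the reports mention.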
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions to this problem exist, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With their opportunities for parallelization and hardware-level performance, FPGAs become an interesting alternative. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation in relation to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the user's relative position and orientation. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.