H-BRS Bibliography
Institute of Visual Computing (IVC): 313 publications
Document Type
- Conference Object (210)
- Article (65)
- Report (14)
- Part of a Book (6)
- Conference Proceedings (5)
- Book (monograph, edited volume) (4)
- Doctoral Thesis (4)
- Part of Periodical (2)
- Contribution to a Periodical (1)
- Research Data (1)
3D tracking using multiple Nintendo Wii Remotes: a simple consumer hardware tracking approach
(2009)
An easy-to-build and cost-effective 3D tracking solution is presented, using Nintendo Wii Remotes acting as cameras. As the hardware differs from usual tracking cameras, the calibration and tracking process has to be adapted accordingly. The described tracking approach could be used for tracking the user's motions in video games based upon physical activity (sports, fighting, or dancing games), allowing the player to interact with the game in a more intuitive way than by just pressing buttons.
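The abstract does not spell out the reconstruction step, but a common way to recover a 3D point from two calibrated cameras (which the Wii Remotes effectively are, each reporting IR blob pixel coordinates) is linear DLT triangulation. A minimal sketch in NumPy; the projection matrices and observations below are toy values, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D observations of the same IR blob in each camera
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the homogeneous 3D point is the null-space vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and one shifted along x (a stereo pair).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))
```

With noise-free observations the DLT recovers the original point exactly; with real Wii Remote data, the calibration of each camera has to be estimated first.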
3D user interfaces for virtual reality and games: 3D selection, manipulation, and spatial navigation
(2018)
In this course, we take a detailed look at different topics in the field of 3D user interfaces (3DUIs) for Virtual Reality and gaming. With the advent of Augmented and Virtual Reality in numerous application areas, the need for and interest in more effective interfaces become prevalent, driven forward by improved technologies, increasing application complexity, and user experience requirements, among other factors. Within this course, we highlight key issues in the design of diverse 3DUIs by looking closely into both simple and advanced 3D selection/manipulation and spatial navigation interface design topics. These topics are highly relevant, as they form the basis for most 3DUI-driven applications, yet they can also cause major issues (performance, usability, experience, motion sickness) when not designed properly, as they can be difficult to handle. Building on a general understanding of 3DUIs, we discuss typical pitfalls by looking closely at theoretical and practical aspects of selection, manipulation, and navigation, and highlight guidelines for their use.
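One of the selection techniques such a course typically covers is ray-casting selection: shoot a ray from the hand or controller and pick the nearest intersected object. A minimal sketch with hypothetical bounding-sphere scene objects (illustrative only, not code from the course):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive ray parameter t where the ray hits the sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # direction is assumed unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def pick(origin, direction, objects):
    """Ray-casting selection: return the name of the nearest object hit by the ray."""
    hits = [(t, name) for name, (center, radius) in objects.items()
            if (t := ray_sphere_t(origin, direction, center, radius)) is not None]
    return min(hits)[1] if hits else None

# Hypothetical scene: name -> (bounding-sphere center, radius).
scene = {"cube": ((0, 0, 5), 1.0), "lamp": ((0, 0, 9), 1.0)}
print(pick((0, 0, 0), (0, 0, 1), scene))  # nearest object along +z
```

Real 3DUIs refine this with selection volumes, snapping, or progressive refinement, which is exactly where the design pitfalls mentioned above appear.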
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
The objective of the presented approach is to develop a 3D-reconstruction method for microorganisms from sequences of microscopic images obtained by varying the level of focus. The approach is limited to translucent, silicate-based marine and freshwater organisms (e.g. radiolarians). The proposed 3D-reconstruction method exploits the connectivity of similarly oriented and spatially adjacent edge elements in consecutive image layers. This yields a 3D mesh representing the global shape of the objects together with details of the inner structure. Possible applications can be found in comparative morphology or hydrobiology, where, for example, deficiencies in growth and structure during incubation in toxic water or gravity effects on metabolism have to be determined.
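The paper links edge elements across focus layers; a simpler, related depth-from-focus baseline assigns each pixel the layer where a local sharpness measure peaks. The sketch below uses a discrete Laplacian as the focus measure on toy data; it illustrates the principle, not the authors' edge-connectivity method:

```python
import numpy as np

def depth_from_focus(stack):
    """Assign each pixel the index of the stack layer where it appears sharpest.

    stack : (L, H, W) array of grayscale images taken at L focus levels.
    Sharpness is measured with a discrete Laplacian; the paper's actual
    method links edge elements across layers instead.
    """
    focus = np.zeros_like(stack)
    for i, img in enumerate(stack):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        focus[i] = np.abs(lap)
    return focus.argmax(axis=0)          # (H, W) map of depth indices

# Toy stack: layer 2 contains a sharp bright square, all other layers are flat.
stack = np.zeros((4, 16, 16))
stack[2, 6:10, 6:10] = 1.0
depth = depth_from_focus(stack)
print(depth[6, 6])   # an edge pixel of the square is sharpest in layer 2
```

The per-pixel depth indices correspond to the known focus distances of the layers, which is what turns a focus stack into a height or shape estimate.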
The objective of this research project is to develop a user-friendly and cost-effective interactive input device that allows intuitive and efficient manipulation of 3D objects (6 DoF) in virtual reality (VR) visualization environments with flat projection walls. During this project, it was planned to develop an extended version of a laser pointer with multiple laser beams arranged in specific patterns. Using stationary cameras observing projections of these patterns from behind the screens, it is planned to develop an algorithm for reconstructing the emitter's absolute position and orientation in space. The laser pointer concept is an intuitive form of interaction that would provide the user with familiar, mobile, and efficient navigation through a 3D environment. In order to navigate in a 3D world, it is required to know the absolute position (x, y and z) and orientation (roll, pitch and yaw angles) of the device, a total of 6 degrees of freedom (DoF). Ordinary laser pointers, when captured on a flat surface with a video camera system and then processed, will only provide x and y coordinates, effectively reducing the available input to 2 DoF. In order to overcome this problem, an additional set of multiple (invisible) laser pointers should be used in the pointing device. These laser pointers should be arranged in such a way that the projections of their rays form one fixed dot pattern when intersected with the flat surface of the projection screens. Images of this pattern will be captured by a real-time camera-based system and then processed using mathematical re-projection algorithms. This allows the reconstruction of the full absolute 3D pose (6 DoF) of the input device. Additionally, multi-user or collaborative work should be supported by the system, which would allow several users to interact with a virtual environment at the same time.
Possibilities to port the processing algorithms to embedded processors or FPGAs will also be investigated during this project.
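Recovering a full 6-DoF pose from the projections of a known planar dot pattern is a plane-based pose estimation problem. A hedged sketch under simplifying assumptions (identity camera intrinsics, noise-free correspondences, at least four coplanar dots): estimate the homography by DLT, then decompose H = [r1 r2 t]:

```python
import numpy as np

def homography(src, dst):
    """DLT estimate of H (up to scale) with dst ~ H @ [x, y, 1], for n >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A))
    return vt[-1].reshape(3, 3)

def pose_from_homography(H):
    """Decompose H = [r1 r2 t] into rotation and translation (identity intrinsics assumed)."""
    H = H / np.linalg.norm(H[:, 0])      # remove the arbitrary DLT scale
    r1, r2, t = H[:, 0], H[:, 1], H[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t

# Five hypothetical coplanar dots (pattern plane z = 0); true pose: R = I, t = (0, 0, 4).
pattern = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.25]], dtype=float)
projected = pattern / 4.0                # pinhole projection (x/z, y/z) with z = 4
H = homography(pattern, projected)
if H[2, 2] < 0:                          # the DLT sign is arbitrary; keep the pattern in front
    H = -H
R, t = pose_from_homography(H)
print(np.round(t, 3))
```

With noisy real dots one would additionally orthonormalize R and refine the pose, e.g. with an iterative solver; the dot-pattern layout above is purely illustrative.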
A Bicycle Simulator Based on a Motion Platform in a Virtual Reality Environment - FIVIS Project
(2007)
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the JamaicaVM (JJVM). The basic architectural difference between the two machines is that the JJVM uses fixed-size blocks for allocating objects on the heap. This means that objects have to be split into several connected blocks if they are bigger than the specified block size; on the other hand, for small objects a full block must still be allocated. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines; the experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
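The theoretical overhead of fixed-size block allocation can be stated directly: an object of size s stored in blocks of size b occupies ceil(s/b) blocks. A small illustration (ignoring the per-block link words a real fixed-block heap also needs):

```python
import math

def block_overhead(obj_size, block_size):
    """Bytes wasted when one object is stored in fixed-size heap blocks."""
    blocks = math.ceil(obj_size / block_size)
    return blocks * block_size - obj_size

# A 100-byte object in 32-byte blocks needs 4 blocks -> 28 bytes of padding;
# a 10-byte object still occupies a whole 32-byte block -> 22 bytes wasted.
print(block_overhead(100, 32), block_overhead(10, 32))
```

The worst case per object approaches one full block minus one byte, which is exactly the kind of bound the paper's theoretical analysis quantifies against measured SPECjvm2008 allocations.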
A Low-Cost Based 6 DoF Head Tracker for Usability Application Studies in Virtual Environments
(2008)
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize virtual objects, the position and orientation of the user's view or the camera is needed. Tracking the user's viewpoint is an essential area in MR applications, especially for interaction and navigation. In present systems, the initialization is often complex. For this reason, we introduce a new method for fast initialization of markerless object tracking. This method is based on Speeded-Up Robust Features (SURF) and, paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze and exploit the human visual system's limitations to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users' quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
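Accuracy and eccentricity analyses of this kind boil down to converting on-screen offsets between gaze samples and targets into visual angles. A sketch of such a computation; the sample format, units, and 60 cm viewing distance are assumptions for illustration, not values from the study:

```python
import numpy as np

def visual_angle_deg(p, q, distance):
    """Angle in degrees between two on-screen points seen from `distance`
    (screen center at the origin; coordinates and distance in the same units)."""
    a = np.array([p[0], p[1], distance])
    b = np.array([q[0], q[1], distance])
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def fixation_accuracy_deg(samples, target, distance):
    """RMS angular offset of gaze samples from the fixation target."""
    errs = [visual_angle_deg(s, target, distance) for s in samples]
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy data: gaze samples (cm on screen) scattered around a central target, 60 cm away.
samples = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.2), (0.0, -0.2)]
print(round(fixation_accuracy_deg(samples, (0.0, 0.0), 60.0), 3))
```

Eye-tracker precision is computed analogously, but over the angular distances between successive samples instead of sample-to-target offsets.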
In this paper we present ongoing research on the development of a Virtual-Reality-based product customization application. The work addresses the problem of flexible and quick customization of products from a great number of parts. Our application is an effective instrument that can be used simultaneously by two users for rapid assembly tasks, allowing engineers and designers to work collaboratively. Furthermore, it is directly connected to a manufacturing environment, which is able to produce the product right after customization. In the paper we describe the architecture of the application and our interaction and assembly techniques, and explain how the system can be integrated into a manufacturing environment.
We present a system that combines voxel and polygonal representations into a single octree acceleration structure that can be used for ray tracing. Voxels are well-suited to create good level-of-detail for high-frequency models where polygonal simplifications usually fail due to the complex structure of the model. However, polygonal descriptions provide the higher visual fidelity. In addition, voxel representations often oversample the geometric domain especially for large triangles, whereas a few polygons can be tested for intersection more quickly.
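A typical rule for switching between the two representations, sketched here under simple assumptions (not necessarily the authors' exact criterion), is to use the voxel proxy once an octree node's projected footprint drops below about a pixel:

```python
import math

def use_voxel(node_size, hit_distance, fov_rad, image_width, pixel_threshold=1.0):
    """Heuristic LOD switch for a hybrid voxel/polygon octree: approximate the
    node's projected footprint in pixels and fall back to the voxel proxy once
    it shrinks below `pixel_threshold` pixels; otherwise test the triangles."""
    pixels_per_rad = image_width / fov_rad        # small-angle approximation
    projected = (node_size / hit_distance) * pixels_per_rad
    return projected < pixel_threshold

# A 1 cm node 50 m away in a 1024-px-wide, 90-degree view projects to a subpixel
# footprint, so the voxel suffices; a 1 m node 2 m away still needs triangles.
print(use_voxel(0.01, 50.0, math.pi / 2, 1024))
print(use_voxel(1.0, 2.0, math.pi / 2, 1024))
```

This captures the trade-off stated above: voxels give cheap, stable level-of-detail for distant high-frequency geometry, while nearby surfaces keep the higher fidelity of a few exact triangle intersection tests.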
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated experimentally. The results and conclusions of that previous work form the starting point for the work described in this report. The project results show considerable improvements over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended with a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video image material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is to develop a passive tracking device for an immersive environment to improve user interaction and system usability. This requires detecting the user's position and orientation relative to the projection surface, and a reliable estimation in turn requires a robust and fast computation of the BLOBs' center points. This project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been used to provide image material for processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for ground-truth validation and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and the input device providing the image material.
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image processing problem, it provides reliable results for measuring the gained performance and precision compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions exist for this problem, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With the opportunities for parallelization and hardware-level performance, FPGAs become an interesting alternative. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation relative to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the position and orientation of the user interface relative to the screen. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.
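As a software reference for what the FPGA pipelines in these reports compute, BLOB center points can be obtained by connected-component labeling followed by centroid averaging (the hardware versions use streaming pipelines rather than the flood fill sketched here):

```python
from collections import deque

def blob_centroids(image):
    """Label 4-connected foreground regions in a binary image and return the
    centroid (row, col) of each BLOB, in scan order."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one BLOB, accumulating its pixel coordinates.
                q, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Toy binary frame with a 2x2 BLOB and a single-pixel BLOB.
frame = [[0, 1, 1, 0, 0],
         [0, 1, 1, 0, 0],
         [0, 0, 0, 0, 1]]
print(blob_centroids(frame))
```

An FPGA implementation typically replaces the flood fill with a single-pass run-length labeler so centroids can be accumulated at pixel-clock rate, which is where the performance gain over GPP platforms comes from.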