H-BRS Bibliography: Institute of Visual Computing (IVC), 286 publications
Most VE frameworks try to support many different input and output devices. They do not concentrate much on the rendering itself, because this is traditionally done by graphics workstations. In this short paper we present a modern VE framework that has a small kernel and is able to use different renderers, including sound renderers, physics renderers and software-based graphics renderers. While our VE framework, named basho, is still under development, we have an alpha version running under Linux and Mac OS X.
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
This paper describes the work done at our lab to improve the visual and other qualities of Virtual Environments. To achieve better quality, we built a new Virtual Environments framework called basho. basho is a renderer-independent VE framework. Although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho to have a small kernel and several plug-ins.
We present basho, a lightweight and easily extendable virtual environment (VE) framework. Key benefits of this framework are independence of the scene element representation and of the rendering API. The main goal was to make VE applications flexible without the need to change them, not only by being independent of input and output devices. As an example, with basho it is possible to switch from local illumination models to ray tracing by just replacing the renderer, or to replace the graphical representation of the scene elements without changing the application. Furthermore, it is possible to mix rendering technologies within a scene. This paper emphasises the abstraction of the scene element representation.
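The renderer-swapping idea described above can be sketched as dependency injection behind an abstract renderer interface. This is a minimal illustrative sketch; the class and method names are assumptions, not basho's actual API.

```python
# Sketch of renderer independence: the application talks only to an
# abstract Renderer interface, so switching from local illumination to
# ray tracing means registering a different plug-in, not changing the app.
# All names here are illustrative, not basho's real API.

class Renderer:
    def render(self, scene):
        raise NotImplementedError

class LocalIlluminationRenderer(Renderer):
    def render(self, scene):
        return f"rasterised {len(scene)} elements"

class RayTracingRenderer(Renderer):
    def render(self, scene):
        return f"ray traced {len(scene)} elements"

class Application:
    def __init__(self, renderer):
        self.renderer = renderer          # injected, never hard-coded
        self.scene = ["bike", "road", "tree"]

    def frame(self):
        return self.renderer.render(self.scene)

app = Application(LocalIlluminationRenderer())
app.renderer = RayTracingRenderer()       # swap renderer; app code untouched
```

The application object is never edited when the rendering technology changes, which is the flexibility the abstract described.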
A generic approach to describing shape and topography of arbitrary objects is presented, using linguistic variables to combine different features in one fuzzy descriptor. Although the origin of the method lies in molecular visualization and drug design, it can be applied in principle to any surface represented by a polygon mesh. Two approaches to shape description are presented that both lead to linguistic variables that can be used for surface segmentation by means of shape: One approach is based on the calculation of canonical curvatures, the other describes the "embeddedness" of a surface area related to the overall geometry of a 3D object.
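The curvature-based variant of the descriptor can be sketched with fuzzy membership functions over a vertex's curvature. The triangular membership functions and breakpoints below are assumed for illustration, not the parameters of the original method.

```python
# Illustrative sketch: map a vertex's mean curvature to a linguistic
# variable "shape" with fuzzy terms concave / flat / convex.
# Membership shapes and breakpoints are assumed values.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def shape_descriptor(mean_curvature):
    """Fuzzy descriptor of one surface vertex by its mean curvature."""
    return {
        "concave": triangular(mean_curvature, -2.0, -1.0, 0.0),
        "flat":    triangular(mean_curvature, -1.0,  0.0, 1.0),
        "convex":  triangular(mean_curvature,  0.0,  1.0, 2.0),
    }

d = shape_descriptor(0.5)
# The vertex is partly "flat" and partly "convex"; segmentation by shape
# would assign it to the term with the highest membership.
```

Combining several such linguistic variables (e.g. curvature and "embeddedness") into one descriptor is what allows segmentation rules to be phrased in near-natural language.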
The objective of the presented approach is to develop a 3D-reconstruction method for micro organisms from sequences of microscopic images by varying the level-of-focus. The approach is limited to translucent silicate-based marine and freshwater organisms (e.g. radiolarians). The proposed 3D-reconstruction method exploits the connectivity of similarly oriented and spatially adjacent edge elements in consecutive image layers. This yields a 3D-mesh representing the global shape of the objects together with details of the inner structure. Possible applications can be found in comparative morphology or hydrobiology, where e.g. deficiencies in growth and structure during incubation in toxic water or gravity effects on metabolism have to be determined.
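The layer-linking criterion (spatially adjacent and similarly oriented edge elements in consecutive focus layers) can be sketched as follows. The distance and angle thresholds are assumed for illustration.

```python
# Sketch of the layer-linking step of the focus-stack reconstruction:
# an edge element in one layer is connected to an element in the next
# layer when the two are spatially close and similarly oriented.
# Thresholds are assumed values, not those of the original method.
import math

def link_layers(layer_a, layer_b, max_dist=2.0, max_angle=math.radians(20)):
    """Each edge element is (x, y, orientation_rad).
    Returns index pairs (i, j) of linked elements between two layers."""
    links = []
    for i, (xa, ya, ta) in enumerate(layer_a):
        for j, (xb, yb, tb) in enumerate(layer_b):
            close = math.hypot(xa - xb, ya - yb) <= max_dist
            d = abs(ta - tb) % math.pi          # orientation is mod pi
            aligned = min(d, math.pi - d) <= max_angle
            if close and aligned:
                links.append((i, j))
    return links
```

Chaining such links through all layers yields the vertical connectivity from which the 3D mesh of the organism's shape is built.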
The Virtual Memory Palace
(2006)
The intention of the Virtual Memory Palace is to help people memorize information by addressing their visual memory. The concept is based on the “Memory Palace”, an ancient Greek memorization technique, where symbols are placed in a certain way within an imaginative building in order to remember the original information whenever the mind goes through the vision of this building again. The goal of this work was to create such a Memory Palace in a virtual environment, so that it requires less creative effort from the contemporary learner than was necessary in ancient Greece. The Virtual Memory Palace offers the possibility to freely explore a virtual 3D architectural model and to place icons at various locations within this model. Specific behaviors were assigned to these locations to make them more memorable. To test the benefit of this concept, an experiment with 15 subjects was conducted. The results show a higher remembrance rate for items learned in the Virtual Memory Palace compared to a wordlist. The observations made during the test showed that most of the subjects enjoyed the memorization environment and were astonished how well the Virtual Memory Palace worked for them.
In this paper, we describe an approach to academic teaching in computer science using storytelling as a means of background research into hypermedia and virtual reality topics. It is shown that narrative activity within the context of a Hypermedia Novel related to educational content can enhance motivation for self-conducted learning and in parallel lead to an edutainment system of its own. The narrative practice and background research, as well as the resulting product, can supplement lecture material with comparable success to traditional academic teaching approaches.
In this contribution a machine vision inspection system is presented which is designed as a length measuring sensor. It is developed to be applied to a range of heat shrink tubes, varying in length, diameter and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it should be realized in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed, which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for a contact-free and fully automatic control of 100% of produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of > 99.7%.
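The multi-measurement strategy can be sketched as averaging repeated per-frame measurements of the same segment: the standard error of the mean shrinks with the square root of the number of frames. The noise level and frame count below are assumed values for illustration, not figures from the paper.

```python
# Sketch of the multi-measurement idea: one tube segment is measured in
# several consecutive frames while it crosses the field of view, and the
# reported length is the mean of those per-frame sub-pixel measurements.
# Noise level and frame count are assumed for illustration.
import random
import statistics

random.seed(0)
true_length_mm = 50.0
frame_noise_mm = 0.05   # assumed per-frame measurement noise (std dev)

# Simulate one segment observed in 8 consecutive frames.
frames = [true_length_mm + random.gauss(0.0, frame_noise_mm)
          for _ in range(8)]
estimate = statistics.mean(frames)

# Averaging N independent measurements reduces the standard error by
# a factor of sqrt(N), which is how a tight precision target can be met
# even when a single frame is noisier than the target.
```

This is only a statistical sketch of why repeated measurement helps; the actual system also has to associate the per-frame measurements with the correct moving segment.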
Phase Space Rendering
(2007)
GL-Wrapper for Stereoscopic Rendering of Standard Applications for a PC-based Immersive Environment
(2007)
In the presented project, new approaches for the prevention of hand movements leading to hazards and for non-contact detection of fingers are intended to permit comprehensive and economical protection on circular saws. The basic principles may also be applied to other machines with manual loading and/or unloading. Two new detection principles are explained. The first is the distinction between skin and wood or other material by spectral analysis in the near infrared region. Using LEDs and photodiodes it is possible to detect fingers and hands reliably. With a kind of light curtain, intrusion into the dangerous zone near the blade can be prevented. The second principle is video image processing to detect persons, arms and fingers. In the first stage of development, the detection of upper limb extremities within a defined hazard area by means of computer-based video image analysis is investigated.
A Bicycle Simulator Based on a Motion Platform in a Virtual Reality Environment - FIVIS Project
(2007)
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close to reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualization environment which provides visual data relative to the motion and activity of the test bicycle. This implies the bike rider has to pedal and steer the bicycle as they would a traditional bicycle, while forward motion is recorded and processed to control the visualization. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, using a feedback system makes the movements of the platform reflect the virtual environment and the reaction of the driver (e.g. steering angle, step rate).
In this paper we present an ongoing research work dedicated to a Virtual-Reality-based product customization application development. The work is addressing the problem of flexible and quick customization of products from a great number of parts. Our application is an effective instrument that can be simultaneously used by two users for rapid assembly tasks, allowing engineers and designers to work collaboratively. Furthermore, it is directly connected to a manufacturing environment, which is able to produce the product right after customization. In the paper we describe the architecture of the application, our interaction and assembly techniques, and explain how the system can be integrated into a manufacturing environment.
Video surveillance is a central research topic due to the high importance of safety and security issues. Usually, humans have to monitor an area, often for 24 hours a day. Thus, it would be desirable to have automatic surveillance systems that support this job. The system described in this paper is such an automatic surveillance system; it has been developed to detect several dangerous situations in a subway station. This paper discusses the high-level module of the system, in which an expert system is used to detect events.
Today's Virtual Environment frameworks use scene graphs to represent virtual worlds. We believe that this is a proper technical approach, but a VE framework should try to model its application area as accurately as possible. Therefore a scene graph is not the best way to represent a virtual world. In this paper we present an easily extensible model to describe entities in the virtual world. Furthermore, we show how this model drives the design of our VE framework and how it is integrated.
A Low-Cost Based 6 DoF Head Tracker for Usability Application Studies in Virtual Environments
(2008)
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close to reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualisation environment which provides visual data relative to the motion and activity of the test bicycle. That means the bike rider has to pedal and steer the bicycle as on a usual bicycle, while the motion is recorded and processed to control the simulation. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, a feedback system makes the movements of the platform match the virtual environment and the reaction of the driver (e.g. steering angle, step rate).
Ray tracing, accurate physical simulations with collision detection, particle systems and spatial audio rendering are only a few components that become more and more interesting for Virtual Environments due to the steadily increasing computing power. Many components use geometric queries for their calculations. To speed up those queries, spatial data structures are used. These data structures are mostly implemented individually for every problem, resulting in many individually maintained parts, unnecessary memory consumption and wasted computing power to maintain all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
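The idea of one shared structure serving many subsystems can be sketched with a uniform grid exposing a generic radius query. The grid and its API are illustrative assumptions for brevity; the proposed design may use a different structure.

```python
# Minimal sketch of a centralized spatial data structure shared by several
# subsystems (collision detection, spatial audio, particles). A uniform
# hash grid is used here for brevity; the actual design may differ.
from collections import defaultdict

class SpatialGrid:
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(set)   # cell index -> object ids
        self.positions = {}             # object id -> (x, y, z)

    def _cell(self, pos):
        return tuple(int(c // self.cell_size) for c in pos)

    def insert(self, obj_id, pos):
        self.remove(obj_id)
        self.positions[obj_id] = pos
        self.cells[self._cell(pos)].add(obj_id)

    def remove(self, obj_id):
        if obj_id in self.positions:
            old = self.positions.pop(obj_id)
            self.cells[self._cell(old)].discard(obj_id)

    def query_radius(self, pos, r):
        """Ids of all objects within radius r of pos: the kind of generic
        geometric query reused by physics, audio and rendering alike."""
        cx, cy, cz = self._cell(pos)
        reach = int(r // self.cell_size) + 1
        hits = set()
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for oid in self.cells.get((cx + dx, cy + dy, cz + dz), ()):
                        p = self.positions[oid]
                        if sum((a - b) ** 2 for a, b in zip(p, pos)) <= r * r:
                            hits.add(oid)
        return hits
```

Because every subsystem queries the same instance, objects are stored and updated once instead of once per subsystem, which is exactly the duplication the abstract argues against.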
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination it is possible to interactively simulate effects from geometric optics, like correct reflections and refractions.
3D tracking using multiple Nintendo Wii Remotes: a simple consumer hardware tracking approach
(2009)
An easy to build and cost-effective 3D tracking solution is presented, using Nintendo Wii Remotes acting as cameras. As the hardware differs from usual tracking cameras, the calibration and tracking process has to be adapted accordingly. The tracking approach described could be used for tracking the user's motions in video games based upon physical activity (sports, fighting or dancing games), allowing the player to interact with the game in a more intuitive way than by just pressing buttons.
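With two calibrated Wii Remote cameras, a tracked IR marker can be located in 3D by standard linear (DLT) triangulation of its two pixel observations. The sketch below uses toy projection matrices; the actual calibration and marker association steps of the described system are not shown.

```python
# Stereo triangulation sketch for two IR cameras (e.g. two Wii Remotes),
# assuming both have already been calibrated into 3x4 projection
# matrices P1, P2. The matrices below are toy values for illustration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its pixel
    observations x1, x2 in two calibrated cameras."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

# Two toy cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                    # projection into camera 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # projection into camera 2
X_est = triangulate(P1, P2, x1, x2)
```

In practice the noise-free recovery shown here is replaced by a least-squares solution over noisy detections, but the linear system is the same.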
How Does Self-Perception Influence the Choice of Study? E-Portfolio and Gender Issues in Informatics
(2009)
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image-processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. For this problem a number of different solutions exist, but most of these are realized on GPP platforms, where resolution and processing speed define the performance barrier. With the opportunity for parallelization and hardware-level performance, the application of FPGAs becomes interesting. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation in relation to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the position and orientation of the user relative to the screen can be inferred. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary, spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gains against similar approaches on GPP platforms.
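The core computation (finding connected bright regions and their center points) can be sketched in software as connected-component labelling followed by a centroid per component. This is an illustrative reference version; the FPGA realizes the equivalent logic as a streaming hardware pipeline.

```python
# Software sketch of BLOB detection with center-point computation on a
# binary image: flood-fill each 4-connected component of 1-pixels and
# report its centroid. Illustrative reference only; the FPGA version
# implements the equivalent logic in hardware.

def blob_centers(image):
    """image: 2D list of 0/1 values.
    Returns one (row, col) centroid per 4-connected blob."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for r in range(h):
        for c in range(w):
            if image[r][c] and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # centroid = mean pixel coordinate of the component
                centers.append((sum(p[0] for p in pixels) / len(pixels),
                                sum(p[1] for p in pixels) / len(pixels)))
    return centers
```

The centroids of the detected laser dots are the inputs to the pose estimation described above, so their sub-pixel stability directly bounds the tracking accuracy.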