H-BRS Bibliography
Robust Indoor Localization Using Optimal Fusion Filter For Sensors And Map Layout Information
(2014)
In this contribution, a machine vision inspection system is presented that is designed as a length-measuring sensor. It is developed to be applied to a range of heat shrink tubes varying in length, diameter, and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it has to run in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for a contact-free and fully automatic inspection of 100% of produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length, and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of > 99.7%.
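The multi-measurement idea above can be sketched as follows; the function names, sample values, and acceptance logic are illustrative assumptions, not taken from the paper:

```python
import statistics

def fuse_measurements(samples_mm):
    """Fuse repeated length measurements of one tube segment.

    Averaging n independent measurements reduces the standard error
    by roughly sqrt(n), which is one way a multi-measurement strategy
    can reach a 0.1 mm precision target. Values are illustrative.
    """
    return statistics.mean(samples_mm)

def within_tolerance(measured_mm, nominal_mm, tol_mm=0.1):
    """Accept a tube segment if the fused length is within tolerance."""
    return abs(measured_mm - nominal_mm) <= tol_mm
```

For example, fusing the samples `[50.04, 49.97, 50.01]` yields a length well within a 0.1 mm tolerance of a 50.0 mm nominal size.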
The Render Cache [1,2] allows the interactive display of very large scenes, rendered with complex global illumination models, by decoupling camera movement from the costly scene sampling process. In this paper, the distributed execution of the individual components of the Render Cache on a PC cluster is shown to be a viable alternative to the shared memory implementation. As the processing power of an entire node can be dedicated to a single component, more advanced algorithms may be examined. Modular functional units also lead to increased flexibility, useful in research as well as industrial applications. We introduce a new strategy for view-driven scene sampling, as well as support for multiple camera viewpoints generated from the same cache. Stereo display and a CAVE multi-camera setup have been implemented. The use of the highly portable and inter-operable CORBA networking API simplifies the integration of most existing pixel-based renderers. So far, three renderers (C++ and Java) have been adapted to function within our framework.
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination, it is possible to interactively simulate effects from geometric optics, such as correct reflections and refractions.
For the prototypical creation of Virtual Reality (VR) scenes based on real environments, data from current panorama cameras is an obvious choice. However, this data is not directly suitable for integration into a game engine. We therefore present a projection-based method with which images and videos in fisheye format, such as those produced by the Ricoh Theta 360° camera, can be visualized in real time with the Unity game engine without prior conversion. It is further shown that a panorama image can easily be extended manually with coarse depth information using this method, so that a VR display conveys a rough spatial impression of the scene for simple prototypical applications.
Traditionally, traffic simulations are used to predict traffic jams, plan new roads or highways, and estimate road safety. They are also used in computer games and virtual environments. There are two general concepts of modeling traffic: macroscopic and microscopic modeling. Macroscopic traffic models take vehicle collectives into account and do not consider individual vehicles. Parameters like average velocity and density are used to model the flow of traffic. In contrast, microscopic traffic models consider each vehicle individually. Therefore, vehicle-specific parameters are of importance, e.g., current velocity, desired velocity, the velocity difference to the lead vehicle, and the individual time gap.
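The vehicle-specific parameters listed above are exactly those used by the well-known Intelligent Driver Model (IDM) of Treiber et al.; the abstract does not name the model actually used, so the following is only an illustrative sketch with assumed parameter values:

```python
import math

def idm_acceleration(v, v_desired, gap, dv,
                     a_max=1.5, b=2.0, s0=2.0, T=1.5, delta=4):
    """Intelligent Driver Model acceleration (illustrative parameters).

    v          current velocity (m/s)
    v_desired  desired velocity (m/s)
    gap        bumper-to-bumper distance to the lead vehicle (m)
    dv         velocity difference to the lead vehicle, v - v_lead (m/s)
    """
    # Desired dynamic gap: minimum gap plus time-gap and braking terms.
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    # Free-road term minus interaction term.
    return a_max * (1 - (v / v_desired) ** delta - (s_star / gap) ** 2)
```

On a free road (very large gap) the model accelerates toward the desired velocity; when closing in on a slower lead vehicle it produces strong braking.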
In this paper, we present ongoing research work dedicated to the development of a Virtual-Reality-based product customization application. The work addresses the problem of flexible and quick customization of products composed of a great number of parts. Our application is an effective instrument that can be used simultaneously by two users for rapid assembly tasks, allowing engineers and designers to work collaboratively. Furthermore, it is directly connected to a manufacturing environment, which is able to produce the product right after customization. In the paper, we describe the architecture of the application and our interaction and assembly techniques, and explain how the system can be integrated into a manufacturing environment.
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the JamaicaVM (JJVM). The basic architectural difference between the two machines is that the JJVM uses fixed-size blocks for allocating objects on the heap. This means that objects have to be split into several connected blocks if they are bigger than the specified block size. On the other hand, a full block must be allocated even for small objects. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines. The experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
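The internal fragmentation caused by fixed-size block allocation can be estimated with simple arithmetic; the default block size below and the omission of per-block link fields are simplifying assumptions, not details from the paper:

```python
import math

def fixed_block_overhead(object_size, block_size=32):
    """Estimate wasted bytes when an object is stored in fixed-size blocks.

    An object larger than one block is split across ceil(size/block_size)
    blocks; a small object still occupies a full block. Real JamaicaVM
    block layout (e.g. link fields inside blocks) is ignored here.
    """
    blocks = max(1, math.ceil(object_size / block_size))
    allocated = blocks * block_size
    return allocated - object_size  # internal fragmentation in bytes
```

For instance, a 40-byte object in 32-byte blocks needs two blocks (64 bytes) and wastes 24 bytes, while a 64-byte object fits exactly.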
Ray tracing, accurate physical simulations with collision detection, particle systems, and spatial audio rendering are only a few components that are becoming more and more interesting for virtual environments due to the steadily increasing computing power. Many of these components use geometric queries for their calculations. To speed up those queries, spatial data structures are used. These data structures are mostly implemented for every problem individually, resulting in many individually maintained parts, unnecessary memory consumption, and computing power wasted on maintaining all the individual data structures. We propose a design for a centralized spatial data structure that can be used everywhere within the system.
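A centralized structure of this kind could, for instance, expose one shared query interface to all components instead of each subsystem maintaining its own index. The uniform-grid sketch below is a hypothetical illustration, not the design proposed in the paper:

```python
import math
from collections import defaultdict

class SharedSpatialGrid:
    """Minimal uniform grid meant to be shared by several subsystems
    (collision detection, spatial audio, particles) so that geometric
    queries go through one maintained structure. Hypothetical API."""

    def __init__(self, cell=5.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, p):
        # Map a 3D point to its integer grid cell.
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def insert(self, obj_id, p):
        self.cells[self._key(p)].append((obj_id, p))

    def query_radius(self, center, r):
        """Return ids of objects within radius r of center; one shared
        query usable by any component of the virtual environment."""
        k = self._key(center)
        reach = int(math.ceil(r / self.cell))
        hits = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    cell = (k[0] + dx, k[1] + dy, k[2] + dz)
                    for obj_id, p in self.cells[cell]:
                        if math.dist(center, p) <= r:
                            hits.append(obj_id)
        return hits
```

A single instance of such a grid would replace several per-component structures, trading a small amount of query generality for one shared maintenance cost.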
The perceived direction of “up” is determined by gravity, visual information, and an internal estimate of body orientation (Mittelstaedt, 1983; Dyde et al., 2006). Is the gravity level found on other worlds sufficient to maintain gravity’s contribution to this perception? Difficulties in stability reported anecdotally by astronauts on the lunar surface (NASA, 1972) suggest that the Moon’s gravity may not be sufficient, despite this value being far above the threshold for detecting linear acceleration. Knowing how much gravity is needed to provide a reliable orientation cue is required for training and preparing astronauts for future missions to the Moon, Mars, and beyond.
The perceived distance of self-motion induced in a stationary observer by optic flow is overestimated (Redlick et al., Vis Res. 2001 41: 213). Here we assessed how different components of translational optic flow contribute to perceived distance traveled. Subjects sat on a stationary bicycle in front of a virtual reality display that extended beyond 90° on each side. They monocularly viewed a target presented in a virtual hallway wallpapered with stripes that changed colour to prevent tracking individual stripes. Subjects then looked centrally or 30°, 60°, or 90° eccentrically while their view was restricted to an ellipse with faded edges (25° × 42°) centered on their fixation. Subjects judged when they had reached the target’s remembered position. Perceptual gain (perceived/actual distance traveled) was highest when subjects were looking in a direction that depended on the simulated speed of motion. Results were modeled as the sum of separate mechanisms sensitive to radial and laminar optic flow. In our display, distances were perceived as compressed. However, there was no correlation between perceptual compression and perceived speed of motion. These results suggest that visually induced self-motion in virtual displays can be subject to large but predictable error.
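A toy version of such a two-mechanism sum might look as follows; the weights and the cosine/sine mixture are hypothetical assumptions, and the speed dependence reported in the abstract is ignored:

```python
import math

def perceived_distance(actual, eccentricity_deg,
                       w_radial=0.6, w_laminar=0.9):
    """Toy two-mechanism model of distance-traveled perception.

    The flow at the fixated region is treated as a mixture of radial
    (central viewing) and laminar (lateral viewing) components, each
    feeding a mechanism with its own gain. The weights are hypothetical
    illustrations, not fitted values from the study.
    """
    theta = math.radians(eccentricity_deg)
    radial_share = math.cos(theta) ** 2   # dominant when looking centrally
    laminar_share = math.sin(theta) ** 2  # dominant when looking sideways
    gain = w_radial * radial_share + w_laminar * laminar_share
    return gain * actual
```

With these assumed weights, perceptual gain varies smoothly with gaze eccentricity, which is the qualitative pattern such a sum-of-mechanisms model can produce.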
Perception is one of the most important cognitive capabilities of an entity, since it determines how the entity perceives its environment. The presented work focuses on providing cost-efficient but realistic perceptual processes for intelligent virtual agents (IVAs) or NPCs, with the goal of providing a sound information basis for the entities' decision-making processes. In addition, an agent-central perception process should provide a common interface for developers to retrieve data from the IVAs' environment. The overall process is evaluated by applying it to a scenario demonstrating its benefits. The evaluation indicates that such a realistically simulated perception process provides a powerful instrument to enhance the (perceived) realism of an IVA's simulated behavior.
Traffic simulations are generally used to forecast traffic behavior or to simulate non-player characters in computer games and virtual environments. These systems are usually modeled in such a way that traffic rules are strictly followed. However, rule violations are a common part of real-life traffic and thus should be integrated into such models.
In this paper, we describe an approach to academic teaching in computer science using storytelling as a means of background research into hypermedia and virtual reality topics. It is shown that narrative activity within the context of a Hypermedia Novel related to educational content can enhance motivation for self-conducted learning and in parallel lead to an edutainment system of its own. The narrative practice and background research, as well as the resulting product, can supplement lecture material with comparable success to traditional academic teaching approaches.
A generic approach to describing the shape and topography of arbitrary objects is presented, using linguistic variables to combine different features in one fuzzy descriptor. Although the origin of the method lies in molecular visualization and drug design, it can in principle be applied to any surface represented by a polygon mesh. Two approaches to shape description are presented that both lead to linguistic variables usable for shape-based surface segmentation: one approach is based on the calculation of canonical curvatures, the other describes the "embeddedness" of a surface area relative to the overall geometry of a 3D object.
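Linguistic variables of this kind are typically built from membership functions over a feature such as curvature; the terms and breakpoints below are illustrative assumptions, not values from the paper:

```python
def triangular(x, a, b, c):
    """Triangular membership function commonly used for linguistic terms."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def curvature_terms(curvature):
    """Map a scalar curvature value to memberships in three linguistic
    terms of one fuzzy descriptor. Breakpoints are illustrative only."""
    return {
        "concave": triangular(curvature, -2.0, -1.0, 0.0),
        "flat":    triangular(curvature, -1.0,  0.0, 1.0),
        "convex":  triangular(curvature,  0.0,  1.0, 2.0),
    }
```

Segmentation by shape could then group mesh vertices by their dominant term, with the fuzzy memberships allowing soft boundaries between regions.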
In this paper, we describe an approach to academic teaching in computer science using storytelling as a means to investigate hypermedia and virtual reality topics. Indications are shown that narrative activity within the context of a Hypermedia Novel related to educational content can enhance motivation for self-conducted learning and in parallel lead to an edutainment system of its own. In contrast to existing approaches, the Hypermedia Novel environment allows an iterative approach to the narrative content, thereby integrating story authoring and story reception not only at the beginning but at any time. The narrative practice and background research, as well as the resulting product, can supplement lecture material with comparable success to traditional academic teaching approaches. On top of this, there is the added value of soft-skill training and a gain of expert knowledge in areas of personal background research.
The work described in this paper is the result of a cooperation project between the Institute of Visual Computing at the Bonn-Rhein-Sieg University of Applied Sciences, Germany, and the Laboratory of Biomedical Engineering at the Federal University of Uberlândia, Brazil. The aim of the project is the development of a virtual-environment-based training simulator that enables better and faster learning of the control of upper limb prostheses. The focus of the paper is the description of the technical setup, since the learning tutorials still need to be developed and a comprehensive evaluation still needs to be carried out.