In this contribution, a machine vision inspection system is presented which is designed as a length-measuring sensor. It is developed to be applied to a range of heat shrink tubes varying in length, diameter and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it is to be deployed in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for contact-free and fully automatic inspection of 100% of the produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of > 99.7%.
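The multi-measurement strategy combines several sub-pixel readings per tube segment. As a minimal illustrative sketch (the function name, tolerance handling and sample values below are made up, not taken from the paper), averaging repeated measurements and testing against a nominal length could look like:

```python
import statistics

def accept_tube(measurements_mm, nominal_mm, tol_mm=0.1):
    """Combine repeated sub-pixel length measurements of one tube
    segment and check the estimate against the nominal length.

    Averaging n independent measurements reduces the standard error
    of the estimate by roughly 1/sqrt(n).
    """
    estimate = statistics.mean(measurements_mm)
    return abs(estimate - nominal_mm) <= tol_mm, estimate

# Example: five measurements of a nominally 50.0 mm tube segment
ok, est = accept_tube([50.04, 49.97, 50.02, 49.99, 50.03], 50.0)
```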
GL-Wrapper for Stereoscopic Rendering of Standard Applications for a PC-based Immersive Environment
(2007)
In this paper we present ongoing research work on a Virtual-Reality-based product customization application. The work addresses the problem of flexible and quick customization of products assembled from a great number of parts. Our application is an effective instrument that can be used simultaneously by two users for rapid assembly tasks, allowing engineers and designers to work collaboratively. Furthermore, it is directly connected to a manufacturing environment, which is able to produce the product right after customization. In the paper we describe the architecture of the application and our interaction and assembly techniques, and explain how the system can be integrated into a manufacturing environment.
Video surveillance is a focus of research due to the high importance of safety and security issues. Usually, humans have to monitor an area, often for 24 hours a day. Thus, it would be desirable to have surveillance systems that support this job automatically. The system described in this paper is such an automatic surveillance system; it has been developed to detect several dangerous situations in a subway station. This paper discusses the high-level module of the system, in which an expert system is used to detect events.
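As a minimal sketch of the rule-based idea behind such a high-level module (the events, facts and rules below are invented for illustration; the system's actual rule base is not given here):

```python
# Toy rule-based event detection: facts extracted by lower-level
# vision modules are matched against hand-written safety rules.

def detect_events(facts):
    events = []
    # Rule: a person on the track bed is always a dangerous situation.
    if facts.get("person_on_track"):
        events.append("ALARM: person on track bed")
    # Rule: an object stationary for more than 60 s with no owner
    # nearby raises an unattended-object warning.
    if facts.get("object_stationary_s", 0) > 60 and not facts.get("owner_nearby"):
        events.append("WARNING: unattended object")
    return events

print(detect_events({"person_on_track": True, "object_stationary_s": 90}))
```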
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image-processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions exist for this problem, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With their opportunities for parallelization and hardware-level performance, FPGAs become an interesting alternative. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation relative to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the position and orientation of the user relative to the screen. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary, spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares the precision and performance gain against similar approaches on GPP platforms.
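In software terms, the center-point computation described above is connected-component labeling followed by a per-BLOB center-of-mass calculation. A plain NumPy sketch for reference (the FPGA implementation is pipelined hardware and works quite differently):

```python
import numpy as np

def blob_centers(binary):
    """Label 4-connected BLOBs in a binary image and return the
    center of mass of each as (row, col)."""
    binary = np.asarray(binary, dtype=bool)
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue                      # pixel already labeled
        current += 1
        labels[start] = current
        stack = [start]                   # flood fill one component
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < binary.shape[0] and 0 <= nc < binary.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    centers = []
    for lbl in range(1, current + 1):
        rs, cs = np.nonzero(labels == lbl)
        centers.append((rs.mean(), cs.mean()))  # center of mass
    return centers

img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 1]]
print(blob_centers(img))  # two BLOBs: a 2x2 square and a single pixel
```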
A Low-Cost Based 6 DoF Head Tracker for Usability Application Studies in Virtual Environments
(2008)
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real-life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close-to-reality simulation of turns and balance situations. The visual field of the bike rider is enclosed by a multi-screen visualisation environment which provides visual data relative to the motion and activity of the test bicycle. This means the rider has to pedal and steer the bicycle as on a usual bicycle, while the motion is recorded and processed to control the simulation. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, a feedback system makes the movements of the platform match the virtual environment and the reactions of the rider (e.g. steering angle, pedaling rate).
The maternity record ("Mutterpass") was introduced in paper form in the early 1960s as an important prenatal care instrument for pregnant women. It is used in 90% of all pregnancies. Since its introduction in 1968, however, the complexity of prenatal examinations has increased, as have the circumstances accompanying a pregnancy. This motivated the development of an electronic counterpart to the paper-based maternity record, in order to meet the grown requirements of medical documentation and evaluation. A major challenge in the design and development of the electronic maternity record was the definition of a structured, machine-readable exchange format. In addition, globally unique new identifiers had to be developed in order to represent the maternity record electronically. After the prototypical realization of a complete version, piloting began in the Rhine-Neckar metropolitan region in spring 2008.
"Visual Computing" (VC) fasst als hochgradig aktuelles Forschungsgebiet verschiedene Bereiche der Informatik zusammen, denen gemeinsam ist, dass sie sich mit der Erzeugung und Auswertung visueller Signale befassen. Im Fachbereich Informatik der FH Bonn-Rhein-Sieg nimmt dieser Aspekt eine zentrale Rolle in Lehre und Forschung innerhalb des Studienschwerpunktes Medieninformatik ein. Drei wesentliche Bereiche des VC werden besonders in diversen Lehreinheiten und verschiedenen Projekten vermittelt: Computergrafik, Bildverarbeitung und Hypermedia-Anwendungen. Die Aktivitäten in diesen drei Bereichen fließen zusammen im Kontext immersiver virtueller Visualisierungsumgebungen.
3D tracking using multiple Nintendo Wii Remotes: a simple consumer hardware tracking approach
(2009)
An easy-to-build and cost-effective 3D tracking solution is presented, using Nintendo Wii Remotes as cameras. As this hardware differs from usual tracking cameras, the calibration and tracking process has to be adapted accordingly. The described tracking approach could be used to track the user's motions in video games based upon physical activity (sports, fighting or dancing games), allowing the player to interact with the game in a more intuitive way than by just pressing buttons.
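Once the Wii Remote "cameras" are calibrated, a 3D dot position can be recovered by intersecting the viewing rays of two cameras. A sketch of standard midpoint triangulation (a generic technique, not the authors' specific calibration or tracking code):

```python
import numpy as np

def triangulate(origin1, dir1, origin2, dir2):
    """Midpoint triangulation: given two camera centers and the
    viewing rays toward the same IR dot, return the 3D point halfway
    between the closest points of the two rays."""
    d1 = np.asarray(dir1, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(dir2, float); d2 /= np.linalg.norm(d2)
    o1 = np.asarray(origin1, float); o2 = np.asarray(origin2, float)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2

# Two cameras 1 m apart, both seeing a dot at (0.5, 0, 2)
p = triangulate([0, 0, 0], [0.5, 0, 2], [1, 0, 0], [-0.5, 0, 2])
```

With noisy measurements the two rays do not intersect exactly; the midpoint is a simple, robust compromise.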
How Does Self-Perception Influence the Choice of Study? E-Portfolio and Gender Issues in Informatics
(2009)
The perceptual upright results from the multisensory integration of the directions indicated by vision and gravity as well as a prior assumption that upright is towards the head. The direction of gravity is signalled by multiple cues, the predominant of which are the otoliths of the vestibular system and somatosensory information from contact with the support surface. Here, we used neutral buoyancy to remove somatosensory information while retaining vestibular cues, thus "splitting the gravity vector" leaving only the vestibular component. In this way, neutral buoyancy can be used as a microgravity analogue. We assessed spatial orientation using the oriented character recognition test (OChaRT, which yields the perceptual upright, PU) under both neutrally buoyant and terrestrial conditions. The effect of visual cues to upright (the visual effect) was reduced under neutral buoyancy compared to on land but the influence of gravity was unaffected. We found no significant change in the relative weighting of vision, gravity, or body cues, in contrast to results found both in long-duration microgravity and during head-down bed rest. These results indicate a relatively minor role for somatosensation in determining the perceptual upright in the presence of vestibular cues. Short-duration neutral buoyancy is a weak analogue for microgravity exposure in terms of its perceptual consequences compared to long-duration head-down bed rest.
The perceived distance of self-motion induced in a stationary observer by optic flow is overestimated (Redlick et al., Vis Res. 2001 41: 213). Here we assessed how different components of translational optic flow contribute to perceived distance traveled. Subjects sat on a stationary bicycle in front of a virtual reality display that extended beyond 90° on each side. They monocularly viewed a target presented in a virtual hallway wallpapered with stripes that changed colour to prevent tracking of individual stripes. Subjects then looked centrally or 30°, 60° or 90° eccentrically while their view was restricted to an ellipse with faded edges (25° × 42°) centered on their fixation. Subjects judged when they had reached the target's remembered position. Perceptual gain (perceived/actual distance traveled) was highest when subjects were looking in a direction that depended on the simulated speed of motion. Results were modeled as the sum of separate mechanisms sensitive to radial and laminar optic flow. In our display, distances were perceived as compressed. However, there was no correlation between perceptual compression and perceived speed of motion. These results suggest that visually induced self-motion in virtual displays can be subject to large but predictable error.
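The sum-of-mechanisms modeling can be illustrated with a toy model (the weighting functions and coefficients below are invented for illustration only; they are not the fitted values from this study):

```python
import math

def perceptual_gain(eccentricity_deg, w_radial=0.6, w_laminar=0.5):
    """Toy two-mechanism model of perceived/actual distance traveled.
    Radial flow dominates near the focus of expansion (central gaze),
    laminar flow dominates at 90 deg eccentricity; the cosine/sine
    weighting and the coefficients are purely illustrative."""
    ecc = math.radians(eccentricity_deg)
    radial = math.cos(ecc) ** 2    # contribution of radial flow
    laminar = math.sin(ecc) ** 2   # contribution of laminar flow
    return w_radial * radial + w_laminar * laminar

for ecc in (0, 30, 60, 90):
    print(ecc, round(perceptual_gain(ecc), 3))
```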
This contribution presents an easy to implement 3D tracking approach that works with a single standard webcam. We describe the algorithm and show that it is well suited for being used as an intuitive interaction method in 3D video games. The algorithm can detect and distinguish multiple objects in real-time and obtain their orientation and position relative to the camera. The trackable objects are equipped with planar patterns of five visual markers. By tracking (stereo) glasses worn by the user and adjusting the in-game camera's viewing frustum accordingly, the well-known immersive "screen as a window" effect can be achieved, even without the use of any special tracking equipment.
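The "screen as a window" effect amounts to computing an asymmetric (off-axis) viewing frustum from the tracked eye position each frame. A common formulation of this step, as a generic sketch rather than the authors' exact code:

```python
def window_frustum(eye, screen_w, screen_h, near):
    """Asymmetric viewing frustum for the 'screen as a window' effect.
    `eye` is the tracked head position relative to the screen center
    (x right, y up, z toward the viewer, z > 0 in front of the screen).
    Returns (left, right, bottom, top) at the near plane, in the form
    expected by e.g. glFrustum."""
    ex, ey, ez = eye
    scale = near / ez              # project screen edges onto near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Head centered, 1 m from a 0.4 m x 0.3 m screen, near plane at 0.1 m
print(window_frustum((0.0, 0.0, 1.0), 0.4, 0.3, 0.1))
```

When the head moves off-center, the frustum becomes asymmetric, so the rendered scene stays registered with the physical screen edges.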
Reversible logic synthesis is an emerging research topic with application areas such as low-power CMOS design, quantum computing and optical computing. The key motivation behind reversible logic synthesis is to address the heat dissipation problem of current architectures, reducing the dissipation to theoretically zero [2].
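Reversibility means the gate's truth table is a bijection, so no information is lost; by Landauer's principle, erasing no information implies theoretically zero heat dissipation. A small check for the Toffoli gate, a standard universal reversible gate (an illustrative example, not taken from the cited work):

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: inverts the target bit c iff both
    control bits a and b are 1. Universal for reversible logic."""
    return a, b, c ^ (a & b)

# A gate is reversible iff distinct inputs map to distinct outputs,
# i.e. the truth table is a bijection on {0,1}^3.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8   # bijective: all 8 outputs are distinct
```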
Having multiple talkers on a bus system raises the load on this bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus utilizing the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with different partners, a prosthetic limb device, the UNB hand. Communication in this device is done via two CAN buses, which operate at a bit rate of 1 Mbit/s. The developed monitoring system has been completely designed in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of individual modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC. A sample receiver programmed in C is also given. The system has been evaluated using the Microchip CAN Bus Analyzer Tool, connected to the GPIO port of the development board, to simulate CAN communication.
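The overall bus load such a monitor reports is essentially the fraction of the available bit rate consumed by observed frames. A rough back-of-the-envelope sketch (the frame length and rate below are illustrative; the report's monitor counts the actual bits on the wire, including stuff bits):

```python
def can_bus_load(frames_per_second, bits_per_frame=111, bitrate=1_000_000):
    """Rough CAN bus load estimate as a fraction of the bit rate.
    111 bits approximates a standard-ID data frame with 8 data bytes
    plus interframe space, before bit stuffing."""
    return frames_per_second * bits_per_frame / bitrate

# 2000 full-length frames/s on a 1 Mbit/s bus
load = can_bus_load(2000)  # ≈ 0.222, i.e. about 22 % bus load
```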
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool which supports developers in researching FPGA-based applications and exploring FPGA architectures by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow which compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface for Odin II. The goal is to visualize the results of these tools in order to explore the changing structure during the compilation and optimization processes, which can be helpful for researching new FPGA architectures and improving the workflow.
This contribution describes an optical laser-based user interaction system designed for virtual reality (VR) environments. The project's objective is to realize a 6-DoF user input device for interaction with VR applications running in CAVE-type visualization environments with flat projection walls. With a back-projection VR system, in contrast to optical tracking systems, no camera has to be placed within the visualization environment. Instead, cameras observe patterns of laser beam projections from behind the screens. These patterns are emitted by a hand-held input device. The system is robust with respect to partial occlusion of the laser pattern. An inertial measurement unit is integrated into the device in order to improve robustness and precision.