This contribution presents a machine vision inspection system designed as a length-measuring sensor. It was developed to be applied to a range of heat shrink tubes varying in length, diameter, and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it is to be deployed in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed that measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for contact-free and fully automatic control of 100% of the produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length, and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of > 99.7%.
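The multi-measurement idea described above can be sketched as follows: each tube segment yields one length estimate per camera frame, with the tube edges located at sub-pixel precision, and the per-frame estimates of the same segment are averaged. This is only an illustrative sketch; the function names, the linear-interpolation edge model, and the threshold are assumptions, not the system's actual algorithm.

```python
def subpixel_edge(profile, threshold):
    """Locate the first rising threshold crossing in a 1-D intensity
    profile with sub-pixel precision via linear interpolation."""
    for i in range(1, len(profile)):
        a, b = profile[i - 1], profile[i]
        if a < threshold <= b:
            # Fractional position between samples i-1 and i.
            return (i - 1) + (threshold - a) / (b - a)
    return None

def tube_length(profile, threshold, mm_per_pixel):
    """Length between the first rising and last falling edge, in mm."""
    left = subpixel_edge(profile, threshold)
    right = subpixel_edge(profile[::-1], threshold)
    if left is None or right is None:
        return None
    right = (len(profile) - 1) - right  # mirror back to the original axis
    return (right - left) * mm_per_pixel

def averaged_length(per_frame_lengths):
    """Combine several per-frame measurements of the same tube segment."""
    return sum(per_frame_lengths) / len(per_frame_lengths)
```

Averaging several independent per-frame estimates reduces the random error of a single measurement, which is how a 0.1 mm precision can be reached even when one frame alone is noisier than that.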
Zentrale Archivierung und verteilte Kommunikation digitaler Bilddaten in der Pneumokoniosevorsorge
(2010)
Pneumoconiosis screening examinations require reading a chest X-ray according to the ILO classification of dust-induced lung disease. The required images are now largely produced and communicated digitally. This creates new requirements for the technology and workflow mechanisms used in order to ensure an efficient process of examination, reading, and documentation.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform has been evaluated by experimental approaches. The results and conclusions of that previous work form the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended by a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video image material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is the development of a passive tracking device for an immersive environment to improve user interaction and system usability. This requires detecting the user's position and orientation in relation to the projection surface, and a reliable estimation in turn requires a robust and fast computation of the BLOBs' center points. This project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been used to provide image material for processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and the input device providing the image material.
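As a reference for what the hardware computes, the BLOB center-point task can be modeled in software: threshold the image, group bright pixels into connected components, and return each component's center of mass. This is a generic illustration of the technique, not the FPGA design itself; the function names and the 4-connectivity choice are assumptions of this sketch.

```python
from collections import deque

def blob_centers(image, threshold):
    """Return (row, col) centroids of 4-connected components of pixels
    >= threshold, mirroring a center-of-mass BLOB detector."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for r in range(h):
        for c in range(w):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one component, accumulating coordinate sums.
                queue = deque([(r, c)])
                seen[r][c] = True
                sr = sc = n = 0
                while queue:
                    y, x = queue.popleft()
                    sr, sc, n = sr + y, sc + x, n + 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                centers.append((sr / n, sc / n))
    return centers
```

On an FPGA the same accumulation (pixel count plus coordinate sums per label) can be done in a single streaming pass over the video signal, which is what makes the hardware implementation attractive compared with a GPP.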
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions exist for this problem, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With their opportunities for parallelization and hardware-level performance, FPGAs become an interesting alternative. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation in relation to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the relative position and orientation of the user interface. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.
For the prototypical creation of Virtual Reality (VR) scenes based on real environments, data from current panorama cameras is an obvious choice. However, this data is not directly suitable for integration into a game engine. We therefore present a projection-based method with which images and videos in fisheye format, as produced for example by the 360° camera Ricoh Theta, can be visualized in real time with the Unity game engine without prior conversion. We further show that with this method a panorama image can easily be extended manually with coarse depth information, so that a VR presentation conveys a rough spatial impression of the scene for simple prototypical applications.
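The core of such a projection-based method is a mapping from a viewing direction on the sphere to a texture coordinate in the fisheye image, evaluated per vertex or per fragment in the engine. The sketch below uses the idealized equidistant fisheye model (image radius proportional to the angle off the optical axis); the 190° field of view and the absence of lens calibration are assumptions of this example, not parameters of the method described above.

```python
import math

def fisheye_uv(direction, fov_deg=190.0):
    """Map a unit direction (x, y, z), with z along the optical axis,
    to normalized (u, v) in [0, 1]^2 of an equidistant fisheye image."""
    x, y, z = direction
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle off the optical axis
    r = theta / math.radians(fov_deg / 2.0)     # equidistant model: r ~ theta
    phi = math.atan2(y, x)                      # azimuth around the axis
    u = 0.5 + 0.5 * r * math.cos(phi)
    v = 0.5 + 0.5 * r * math.sin(phi)
    return u, v
```

Because the mapping is evaluated directly at render time, the dual-fisheye source never has to be resampled into an equirectangular panorama, which is what makes real-time visualization without conversion possible.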
Neutral buoyancy has been used as an analog for microgravity from the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removal of both somatosensory and vestibular cues to the direction of gravity while floating in microgravity, or using virtual reality to establish conflicts between them, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.
Traditionally, traffic simulations are used to predict traffic jams, plan new roads or highways, and estimate road safety. They are also used in computer games and virtual environments. There are two general concepts of modeling traffic: macroscopic and microscopic modeling. Macroscopic traffic models take vehicle collectives into account and do not consider individual vehicles; parameters like average velocity and density are used to model the flow of traffic. In contrast, microscopic traffic models consider each vehicle individually. Therefore, vehicle-specific parameters are of importance, e.g. current velocity, desired velocity, velocity difference to the lead vehicle, and individual time gap.
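The vehicle-specific parameters listed above (current velocity, desired velocity, velocity difference to the leader, time gap) are exactly the inputs of the Intelligent Driver Model (IDM), a widely used microscopic car-following model. The sketch below is a generic IDM, given here only to illustrate the microscopic modeling style; it is not necessarily the model used in the work described above, and the parameter defaults are common textbook values.

```python
import math

def idm_acceleration(v, v_desired, gap, dv,
                     a_max=1.0, b_comf=1.5, s0=2.0, T=1.5, delta=4):
    """IDM acceleration (m/s^2) from current speed v, desired speed,
    bumper-to-bumper gap to the leader, and approach rate dv = v - v_leader.
    a_max: max acceleration, b_comf: comfortable braking,
    s0: minimum gap, T: desired time gap."""
    # Desired dynamic gap: minimum gap + time-gap term + braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b_comf)))
    # Free-road term minus interaction term.
    return a_max * (1 - (v / v_desired) ** delta - (s_star / gap) ** 2)
```

Integrating this acceleration per vehicle and per time step yields individual trajectories, whereas a macroscopic model would instead evolve aggregate density and mean-velocity fields along the road.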
Having multiple talkers on a bus system increases the load on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus utilizing the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with different partners, a prosthetic limb device, the UNB hand. Communication in this device runs over two CAN buses, which operate at a bit rate of 1 Mbit/s. The developed monitoring system has been designed entirely in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of individual modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC. A sample receiver programmed in C is also provided. The system has been evaluated using the Microchip CAN Bus Analyzer Tool, connected to the GPIO port of the development board, to simulate CAN communication.
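A bus-load figure like the one the monitor displays can be derived by summing the bits actually on the wire during an observation window and dividing by what the bit rate allows. The frame-size formula below (standard 11-bit-identifier CAN 2.0A data frame, worst-case bit stuffing over the stuffable region) is the standard textbook one; the one-second observation window and the function names are assumptions of this sketch, not details of the Verilog design.

```python
def can_frame_bits(dlc, worst_case_stuffing=True):
    """Bits on the wire for a standard (11-bit ID) CAN data frame with
    `dlc` data bytes; optionally including worst-case stuff bits."""
    bits = 44 + 8 * dlc                  # fixed frame fields + data bits
    if worst_case_stuffing:
        # Stuffing applies to the 34 header/CRC bits plus the data bits;
        # at worst one stuff bit is inserted every 4 bits.
        bits += (34 + 8 * dlc - 1) // 4
    return bits

def bus_load_percent(frame_dlcs, bitrate=1_000_000, window_s=1.0):
    """Bus load in percent, given the DLC of every frame observed
    during a window of `window_s` seconds at the given bit rate."""
    used = sum(can_frame_bits(d) for d in frame_dlcs)
    return 100.0 * used / (bitrate * window_s)
```

Counting frame and stuff bits per identifier rather than in total is what allows the monitor to report per-module load in addition to the overall figure.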
In the past decade, computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies are introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. In order to individualise the model, a chain of tools becomes necessary, including medical imaging, image acquisition and processing, mesh generation, material modelling, and finite element simulation (possibly on parallel computer architectures). The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.