Maximal strength testing of different muscle groups is a standard procedure in human physiology experiments. Test subjects have to exert maximum force voluntarily and are verbally encouraged by the investigator. The subjects' performance is influenced by the verbal encouragement, but the encouragement procedure is neither standardized nor reproducible. To counter this problem, a game-based motivation system prototype was developed that provides instant feedback to the subjects as well as incentives to motivate them. The prototype was developed for the Biodex System 3 Isokinetic Dynamometer to improve peak torque performance in an isometric knee extensor strength examination. Data analysis is performed on torque data from an existing study to understand the torque response characteristics of different subjects. The parameters identified in the data analysis are used to design a shark-fish predator-prey game. The game depends on data obtained from the dynamometer in real time. A first evaluation shows that the game rewards and motivates the subject continuously over a repetition to reach the peak torque value. It also shows that the game rewards the subject more if they overcome a baseline torque value within the first second and then gradually increase the torque to reach the peak value.
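As an illustration of the torque-response parameters mentioned above, the following minimal Python sketch computes the peak torque of a repetition and checks whether a baseline torque is exceeded within the first second; the function and parameter names are hypothetical and not taken from the study.

```python
import numpy as np

def analyze_repetition(torque, sample_rate_hz, baseline_nm):
    """Summarize one isometric repetition from a sampled torque signal.

    torque        : 1-D array of torque samples in Nm
    sample_rate_hz: sampling frequency of the dynamometer signal
    baseline_nm   : baseline torque the subject should exceed early on
    """
    torque = np.asarray(torque, dtype=float)
    peak_torque = torque.max()
    # Index of the first sample that exceeds the baseline, if any.
    above = np.nonzero(torque >= baseline_nm)[0]
    time_to_baseline = above[0] / sample_rate_hz if above.size else None
    # Reward criterion sketched in the abstract: baseline reached within
    # the first second, followed by a rise towards the peak value.
    reached_early = time_to_baseline is not None and time_to_baseline <= 1.0
    return {"peak_torque": peak_torque,
            "time_to_baseline_s": time_to_baseline,
            "baseline_within_first_second": reached_early}
```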
For almost all modern means of transportation (car, train, airplane), driving simulators exist that provide realistic models of complex traffic situations under defined laboratory conditions. For many years, these simulators have been successfully used for driver training and education and have considerably contributed to overall road safety. Unfortunately, there is no such advanced system for the bicycle, although the number of bike accidents has been increasing against the common trend during the last decade. Hence, the objective of this project is to design a real bicycle simulator that is able to generate any desired traffic situation within an immersive visualization environment. For this purpose, the bike is mounted onto a motion platform with six degrees of freedom that enables a close-to-reality simulation of external forces acting on the bike. This system is surrounded by three projection walls displaying a virtual scenario.
Many modern computer games take place in urban environments. One important fact about such environments is that there are plenty of other people around who might react to whatever the user does. For example, in the "FIVIS" project a bicycle simulator is developed that will serve (besides other applications) as an instrument for traffic education, which strongly requires other traffic participants reacting to the user's actions (see http://www.fivis.eu and [7]).
This contribution describes our current work on a general approach for an interactive agent system for urban environments.
Video surveillance is at the center of research due to the high importance of safety and security issues. Usually, humans have to monitor an area, often for 24 hours a day. Thus, it would be desirable to have automatic surveillance systems that support this job. The system described in this paper is such an automatic surveillance system; it has been developed to detect several dangerous situations in a subway station. This paper discusses the high-level module of the system, in which an expert system is used to detect events.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). This work is based upon [5], where the feasibility of application-specific image processing algorithms on an FPGA platform has been evaluated by experimental approaches. The results and conclusions of that previous work build the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended by a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video image material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is the invention of a passive tracking device for an immersive environment to improve user interaction and system usability. Therefore, the detection of the user's position and orientation in relation to the projection surface is required. For a reliable estimation, a robust and fast computation of the BLOBs' center points is necessary. This project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been applied to provide image material for the processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera, which is attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and the input device providing the image material.
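The FPGA implementation itself is not reproduced here; as a software reference sketch for the center-point computation, the following Python code thresholds a grayscale frame, labels connected regions and returns their centroids. The threshold value and function names are illustrative assumptions, not taken from the report.

```python
import numpy as np
from scipy import ndimage

def blob_centers(frame, threshold=200):
    """Detect bright BLOBs in a grayscale frame and return their centers.

    Thresholding, connected-component labeling and per-component centroids
    mirror the processing steps described in the abstract above.
    """
    binary = frame >= threshold                      # binarize the frame
    labels, count = ndimage.label(binary)            # connected components
    if count == 0:
        return []
    # Centroid (row, col) of each labeled region.
    centers = ndimage.center_of_mass(binary, labels, range(1, count + 1))
    return [(float(r), float(c)) for r, c in centers]
```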
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image-processing problem, it provides reliable results for measuring the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures.
The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of different solutions exist for this problem, but most of them are realized on GPP platforms, where resolution and processing speed define the performance barrier. With the opportunity for parallelization and hardware-level performance, the application of FPGAs becomes interesting. This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. It addresses the detection of the user's position and orientation in relation to the virtual environment in an Immersion Square.
The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the idea is to infer the position and orientation of the user interface relative to the projection surface. For that, the laser dots need to be arranged in a unique pattern, which requires at least five points [29]. For a reliable estimation, a robust computation of the BLOBs' center points is necessary.
This project has covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares precision and performance gain against similar approaches on GPP platforms.
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. Boolean circuits, which can be implemented on FPGAs, are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing structure during the compilation and optimization processes, which can be helpful for researching new FPGA architectures and improving the workflow.
Reversible logic synthesis is an emerging research topic with different application areas such as low-power CMOS design and quantum and optical computing. The key motivation behind reversible logic synthesis is to address the heat dissipation problem of current architectures by reducing dissipation to theoretically zero [2].
Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
In interactive visualization environments that use a multiple-screen setup, every output device has to be supplied frequently with video information. Such virtual environments often use large projection screens, which require high-resolution video data. As scene complexity increases, it can become a challenge to equip a single computer with graphics hardware powerful enough for the task. An efficient approach is to distribute the workload to multiple low-cost computer systems such as game consoles. Today's game consoles are very powerful and specialized for interactive graphics applications, which makes them suitable for rendering purposes. A framework has been developed that builds on Microsoft's XNA Game Studio. It enables interactive distributed rendering on multiple Xbox 360 systems and PCs. Tasks such as game logic synchronization and network session management are handled entirely by the framework, so a game built with it can focus mostly on its own implementation. The framework's structure follows that of the XNA Game Studio, which allows porting existing game projects quickly. Our evaluation showed that the framework is a lightweight solution which leaves almost the full CPU time to the actual game.
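The framework itself builds on XNA Game Studio (C#); purely as a language-neutral illustration of how a projection wall might be partitioned across render nodes, the following Python sketch assigns one vertical viewport tile per node. Node names and the tiling scheme are hypothetical and not taken from the framework.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    node: str      # hostname of the Xbox 360 or PC doing the rendering
    x: int         # left edge of the tile in wall pixel coordinates
    y: int
    width: int
    height: int

def split_wall(nodes, wall_width, wall_height):
    """Assign one vertical tile of the projection wall to each render node."""
    tile = wall_width // len(nodes)
    return [Viewport(node=n, x=i * tile, y=0, width=tile, height=wall_height)
            for i, n in enumerate(nodes)]

# Example: three projection walls of 1920x1080 each, one node per wall.
for vp in split_wall(["xbox-1", "xbox-2", "pc-1"], 3 * 1920, 1080):
    print(vp)
```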
This contribution introduces a method for partially rigid 3D registration of high-resolution magnetic resonance (MR) images of the eye globes (EG) and the optic nerve sheaths (ONS) based on the reconstruction of their 3D models. Conventional registration methods do not preserve anatomical structures in such a way that quantitative anatomical comparisons could be computed. Therefore, the iterative closest point (ICP) registration method has been extended to enable partial rigid registration (PICP) of flexible tissue structures within certain spatially limited areas. The results of the proposed approach are compared with the non-linear registration method ART. It was shown that the PICP approach considerably improves the matching quality of local tissue while preserving anatomical structure at the same time.
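The PICP algorithm is not spelled out in the abstract; as a hedged sketch of the underlying idea, the following Python code runs a generic rigid ICP in which only points selected by a user-supplied mask (e.g. a spatially limited region such as an eye globe) drive the alignment. All names, parameters and the masking scheme are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp(source, target, mask, iterations=20):
    """Rigid ICP restricted to a masked subregion of the source point set.

    source, target: (N, 3) and (M, 3) point arrays
    mask          : boolean array selecting the rigid subregion of `source`
    Returns the 4x4 homogeneous transform aligning the masked source points
    to the target.
    """
    src = source[mask].copy()
    tree = cKDTree(target)
    transform = np.eye(4)
    for _ in range(iterations):
        _, idx = tree.query(src)              # closest target point per source point
        matched = target[idx]
        p_mean, q_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - p_mean).T @ (matched - q_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = q_mean - R @ p_mean
        src = src @ R.T + t                   # apply the incremental step
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        transform = step @ transform          # accumulate the total transform
    return transform
```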
Video surveillance has become a hot research topic due to the recently increased importance of safety and security issues. Usually, security personnel have to monitor a surveillance area, often for 24 hours a day. Thus, it would be desirable to develop intelligent surveillance systems that support this task automatically. The system described in this contribution is conceived as such an automatic surveillance system and has been developed to detect several dangerous situations in subway stations. The workflow and the most important steps, from foreground segmentation, shadow detection, tracking and classification to event detection, are described, discussed and evaluated in detail. The developed surveillance system yields satisfying results, as dangerous situations that need to be recognized are detected in most cases.
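As a minimal sketch of the first two pipeline stages, foreground segmentation and shadow suppression, the following Python snippet uses OpenCV's MOG2 background subtractor; the concrete methods and parameters of the described system may well differ, so this is an assumption for illustration only.

```python
import cv2

# Background subtraction with shadow detection (hypothetical parameter values).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def foreground_mask(frame):
    """Return a binary foreground mask for one video frame.

    MOG2 marks shadow pixels as 127; thresholding at 200 suppresses them so
    that shadows are not passed on to tracking and classification.
    """
    mask = subtractor.apply(frame)
    _, binary = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    return binary
```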
How Does Self-Perception Influence the Choice of Study? E-Portfolio and Gender Issues in Informatics
(2009)
Agent systems are used in many ways, yet current implementations are only able to reproduce primarily rule-conforming or "scripted" behavior, even when randomized methods are employed. For a realistic representation, however, deviations from the rules are also necessary, and these occur not randomly but depending on the context. Within this research project, a realistic road traffic simulator was realized which, by means of a finely specified system for cognitive agents, also generates these irregular behaviors and thus simulates realistic traffic behavior for use in VR applications. By extending the agents with psychological personality profiles based on the five-factor model, the agents show individualized and at the same time consistent behavior patterns. A dynamic emotion model additionally provides situation-dependent adaptation of behavior, e.g. during long waiting times. Since the detailed simulation of cognitive processes, personality influences and emotional states demands considerable computing power, a multi-layered simulation approach was developed that allows the level of detail of computation and rendering of each agent to be changed stepwise during the simulation, so that all agents in the system can be simulated consistently. In several evaluation iterations in an existing VR application, the applicant's FIVIS bicycle simulator, it was impressively demonstrated that the realized concepts solve the originally formulated research questions convincingly and efficiently.
Estimation of camera motion from RGB-D images has been an active research topic in recent years. Several RGB-D visual odometry systems have been reported in the literature and released under open-source licenses. The objective of this contribution is to evaluate the recently published approaches to motion estimation. A publicly available dataset of RGB-D sequences with precise ground truth data is applied, and the results are compared and discussed. Experiments on a mobile robot used in the RoboCup@Work league are discussed as well. The best-performing system is capable of estimating the motion with drift as small as 1 cm/s under special conditions, and it has proven robust against shaky motion and moderately non-static scenes.
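As an illustration of how such an evaluation against ground truth can be quantified, the following Python sketch computes a translational RMSE and a crude drift-per-second figure from timestamp-aligned trajectories; the exact error metrics used in the contribution are not stated here, so these definitions are assumptions.

```python
import numpy as np

def translational_rmse(estimated, ground_truth):
    """Root-mean-square translational error between two aligned trajectories.

    estimated, ground_truth: (N, 3) arrays of camera positions sampled at
    matching timestamps (association and alignment are assumed done already).
    """
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def drift_per_second(estimated, ground_truth, timestamps):
    """Crude drift estimate: final position error divided by elapsed time."""
    final_error = np.linalg.norm(np.asarray(estimated[-1]) - np.asarray(ground_truth[-1]))
    return final_error / (timestamps[-1] - timestamps[0])
```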
The objective of the FIVIS project is to develop a bicycle simulator which is able to simulate real-life bicycle ride situations as a virtual scenario within an immersive environment. A sample test bicycle is mounted on a motion platform to enable a close-to-reality simulation of turns and balance situations. The visual field of the bike rider is enveloped within a multi-screen visualisation environment which provides visual data relative to the motion and activity of the test bicycle. This means the bike rider has to pedal and steer the bicycle like a usual bicycle, while the motion is recorded and processed to control the simulation. Furthermore, the platform is fed with real forces and accelerations that have been logged by a mobile data acquisition system during real bicycle test drives. Thus, a feedback system makes the movements of the platform match the virtual environment and the reactions of the rider (e.g. steering angle, step rate).
The Mutterpass (the German maternity record) was introduced in paper form in the early 1960s as an important preventive care instrument for pregnant women. It is used in 90% of all pregnancies. Since its introduction in 1968, however, the complexity of the prenatal examinations has increased, and the circumstances accompanying a pregnancy have often become more complex as well. This was the motivation for developing an electronic representation of the paper-based Mutterpass in order to meet the grown requirements of medical documentation and evaluation. A major challenge in the design and development of the electronic Mutterpass was the definition of a structured and machine-readable exchange format. In addition, new globally unique identifiers had to be developed in order to represent the Mutterpass electronically. After the prototypical realization of a complete version, piloting began in the Rhine-Neckar metropolitan region in spring 2008.
"Visual Computing" (VC) fasst als hochgradig aktuelles Forschungsgebiet verschiedene Bereiche der Informatik zusammen, denen gemeinsam ist, dass sie sich mit der Erzeugung und Auswertung visueller Signale befassen. Im Fachbereich Informatik der FH Bonn-Rhein-Sieg nimmt dieser Aspekt eine zentrale Rolle in Lehre und Forschung innerhalb des Studienschwerpunktes Medieninformatik ein. Drei wesentliche Bereiche des VC werden besonders in diversen Lehreinheiten und verschiedenen Projekten vermittelt: Computergrafik, Bildverarbeitung und Hypermedia-Anwendungen. Die Aktivitäten in diesen drei Bereichen fließen zusammen im Kontext immersiver virtueller Visualisierungsumgebungen.
Usually, the first processing step in computer vision systems consists of a spatial convolution with only a few simple filters. Therefore, information is lost if it is not represented explicitly for the following processing steps. This paper proposes a new hierarchical filter scheme that can efficiently synthesize the responses of a large number of specific filters. The scheme is based on steerable filters. It also allows for an efficient online adjustment of the trade-off between the speed and the accuracy of the filters. We apply this method to the detection of facial keypoints, especially the eye corners. These anatomically defined keypoints exhibit a large variability in their corresponding image structures, so that a flexible low-level feature extraction is required.
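As a minimal sketch of the basic steering identity underlying such schemes (the paper's hierarchical scheme goes well beyond this), the following Python code synthesizes the response of a first-derivative-of-Gaussian filter at an arbitrary orientation from only two basis responses; function and parameter names are illustrative.

```python
import numpy as np
from scipy import ndimage

def steered_response(image, theta, sigma=2.0):
    """Synthesize an oriented first-derivative-of-Gaussian response.

    Only the two basis filters are ever convolved with the image; the
    response at any angle `theta` is obtained by interpolation, which is
    the core idea behind steerable filter schemes.
    """
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # derivative along x
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # derivative along y
    return np.cos(theta) * gx + np.sin(theta) * gy
```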
GL-Wrapper for Stereoscopic Rendering of Standard Applications for a PC-based Immersive Environment
(2007)
We present a multi-user virtual reality (VR) setup that aims at providing novel training tools for paramedics to enhance current learning methods. The hardware setup consists of a two-user full-scale VR environment with head-mounted displays for two interactive trainees and one additional desktop PC for one trainer participant. The software provides a connected multi-user environment, showcasing a paramedic emergency simulation with a focus on anaphylactic shock, a representative scenario for critical medical cases that happen too rarely to reliably occur within a regular curricular term of vocational training. The prototype offers hands-on experience with multi-user VR in an applied scenario, generating discussion around the current state and future development concerning four important research areas: (a) user navigation, (b) interaction, (c) level of visual abstraction, and (d) level of task abstraction.
In this work, we automatically distinguish the efficient high-elbow pose from a dropped one in the pulling phase of the front crawl stroke in amateur front-view video recordings. This task is challenging due to the aquatic environment and missing depth information. We predict the pull's efficiency with multiclass SVM and random forest classifiers, using arm key positions and angles as the feature set. We evaluate our approach on a labeled dataset of video frames taken from 25 members of the masters' swim club at Ryerson University with different levels of expertise and physiological characteristics. Our results show the effectiveness of our approach with the random forest classifier, yielding 67% accuracy.
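As a hedged sketch of the classification step, the following Python snippet trains a random forest on per-frame feature vectors and reports cross-validated accuracy; the file names, feature layout and label encoding are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: per-frame feature vectors built from arm key positions and joint angles;
# y: labels such as "high elbow" vs. "dropped elbow".
# (Both files are illustrative placeholders.)
X = np.load("pull_features.npy")
y = np.load("pull_labels.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```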