H-BRS Bibliography
Departments, institutes and facilities
- Institute of Visual Computing (IVC) (105)
Document Type
- Conference Object (72)
- Article (17)
- Report (11)
- Part of a Book (5)
Keywords
- FPGA (3)
- Virtual Reality (3)
- Hyperspectral image (2)
- Image Processing (2)
- Intelligent virtual agents (2)
- Perceptual Upright (2)
- Raman microscopy (2)
- Serious Games (2)
- Virtuelle Realität (2)
- computer vision (2)
- image fusion (2)
- machine vision (2)
- pansharpening (2)
- serious games (2)
- AR (1)
- Adaptive Behavior (1)
- Agents (1)
- Algorithms (1)
- Aufrecht (1)
- BLOB Detection (1)
- Blob Detection (1)
- Bounding Box (1)
- Cell/B.E. (1)
- Center-of-Mass (1)
- Centrifugation (1)
- Chemical imaging (1)
- Datalog (1)
- Displacement (1)
- EEG (1)
- ERP (1)
- Educational institutions (1)
- Emotion (1)
- Event detection (1)
- Exercise (1)
- Expert system (1)
- Eye Tracking (1)
- FIVIS (1)
- Fahrradfahrsimulator (1)
- Fahrsimulator (1)
- Five Factor Model (1)
- Foreground segmentation (1)
- Forschungsbericht (1)
- Games and Simulations for Learning (1)
- Gaze Behavior (1)
- Gefahrenprävention (1)
- Gender Issues in Computer Science Education (1)
- Grailog (1)
- Gravitation (1)
- HDBR (1)
- Handzeichenerkennung (1)
- Head-mounted Display (1)
- Heat shrink tubing (1)
- Human Factors (1)
- Human orientation perception (1)
- Interaktion (1)
- Internet (1)
- Interoperability (1)
- Inventory (1)
- JavaScript (1)
- Künstliche Gravitation (1)
- Management (1)
- Measurement (1)
- Mikrogravitation (1)
- Motion (1)
- Multimodal hyperspectral data (1)
- Multiuser (1)
- N200 (1)
- NVIDIA Tesla (1)
- Orientierung (1)
- P300 (1)
- Parallel Processing (1)
- Parallelization (1)
- Pattern recognition (1)
- Perception (1)
- Personality (1)
- Pointing (1)
- Pointing devices (1)
- Pose Estimation (1)
- Radfahren (1)
- Raumfahrt (1)
- Raumwahrnehmung (1)
- Reasoning (1)
- Reversible Logic Synthesis (1)
- RuleML (1)
- SVG (1)
- Saccades (1)
- Saccadic suppression (1)
- Second Life (1)
- Sense of presence (1)
- Shadow detection (1)
- Social Virtual Reality (1)
- Somatogravic Illusion (1)
- Standards (1)
- Supervised classification (1)
- Swim Stroke Analysis (1)
- Synthetic perception (1)
- SystemVerilog (1)
- Three-dimensional displays (1)
- Tracking (1)
- Traffic Simulations (1)
- Usability (1)
- VR (1)
- Verilog (1)
- Verkehrserziehung (1)
- Verkehrssimulation (1)
- Vibrational microspectroscopy (1)
- Video surveillance (1)
- Virtual Agents (1)
- Virtual Environments (1)
- Virtual attention (1)
- Virtual environments (1)
- Virtual reality (1)
- Visual perception (1)
- Visualization (1)
- Wahrnehmung (1)
- Weltraumforschung (1)
- XML (1)
- XSLT (1)
- Zentrifuge (1)
- adaptive agents (1)
- analysis (1)
- automation (1)
- bicycle (1)
- brightfield microscopy (1)
- bus load (1)
- camera (1)
- can bus (1)
- computational logic (1)
- cooperative path planning (1)
- data logging (1)
- directed hypergraphs (1)
- fiducial marker (1)
- fpga (1)
- graphs (1)
- gravito-inertial force (1)
- head down bed rest (1)
- heat shrink tubes (1)
- immersive Visualisierung (1)
- infrared pattern (1)
- intelligente virtuelle Agenten (1)
- interactive computer graphics (1)
- measurement (1)
- medical training (1)
- mesoscopic agents (1)
- microcontroller (1)
- mixed reality (1)
- monitoring (1)
- mood (1)
- motion platform (1)
- multi-user VR (1)
- multiresolution analysis (1)
- neural networks (1)
- neuro-cognitive performance (1)
- optical tracking (1)
- perception of upright (1)
- physical activity (1)
- physical model immersive (1)
- prefrontal cortex (1)
- robotic arm (1)
- robotic evaluation (1)
- rules (1)
- security (1)
- semantic image segmentation (1)
- simulator (1)
- space flight analog (1)
- subjective visual vertical (1)
- submillimeter precision (1)
- unmanned aerial vehicle (1)
- unmanned ground vehicle (1)
- user input (1)
- user interaction (1)
- vestibular system (1)
- virtual reality (1)
- virtuelle Umgebungen (1)
- visual quality control (1)
- workday breaks (1)
Zentrale Archivierung und verteilte Kommunikation digitaler Bilddaten in der Pneumokoniosevorsorge
(2010)
Pneumoconiosis screening examinations require the reading of a chest X-ray according to the ILO classification of pneumoconioses. These images are now produced and communicated digitally on a large scale. This creates new requirements for the technology and workflow mechanisms used, in order to ensure an efficient process of examination, diagnostic reading and documentation.
In this contribution, we describe the activities and promotion programs established at Bonn-Rhein-Sieg University as a whole and at the Department of Computer Science in particular to increase the total number of computer science students, and especially the proportion of women. We report on our experiences in addressing gender aspects in education and evaluate the outcome of our programs with respect to our equal-opportunity strategy for women. We propose a closer look at mental self-theories, enabled by e-portfolios, as a way to address gender issues in computer science. Moreover, we identify and discuss reasons that may be responsible for the reduced interest of young adults, and of young women in particular, in choosing a computer science study program.
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and in FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing circuit structure during the compilation and optimization processes, which can be helpful for researching new FPGA architectures and improving the workflow.
In this contribution, a machine vision inspection system is presented which is designed as a length-measuring sensor. It is developed to be applied to a range of heat shrink tubes varying in length, diameter and color. The challenges of this task were the precision and accuracy demands as well as the real-time applicability of the entire approach, since it is to be deployed in regular industrial line production. In production, heat shrink tubes are cut to specific sizes from a continuous tube. A multi-measurement strategy has been developed which measures each individual tube segment several times with sub-pixel accuracy while it is in the visual field. The developed approach allows for contact-free and fully automatic control of 100% of the produced heat shrink tubes according to the given requirements, with a measuring precision of 0.1 mm. Depending on the color, length and diameter of the tubes considered, a true positive rate of 99.99% to 100% has been reached at a true negative rate of more than 99.7%.
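The multi-measurement strategy can be sketched as follows: each tube segment is measured several times while it passes through the camera's field of view, and the sub-pixel length measurements are averaged before checking against the nominal length. This is a minimal illustrative sketch in Python; the function name, the averaging scheme and the example values are assumptions, not the authors' implementation (only the 0.1 mm tolerance comes from the abstract).

```python
# Hypothetical sketch of the multi-measurement strategy: average repeated
# sub-pixel length measurements of one tube segment, then decide pass/fail
# against the target length and the stated 0.1 mm tolerance.

def check_tube(measurements_mm, target_mm, tolerance_mm=0.1):
    """Average repeated sub-pixel measurements of one tube segment and
    decide whether the segment is within tolerance of the target length."""
    if not measurements_mm:
        raise ValueError("no measurements for this segment")
    mean_mm = sum(measurements_mm) / len(measurements_mm)
    return abs(mean_mm - target_mm) <= tolerance_mm, mean_mm

# Example: five noisy sub-pixel measurements of a nominally 50.0 mm tube.
ok, mean_len = check_tube([50.03, 49.98, 50.01, 49.97, 50.02], target_mm=50.0)
```

Averaging several per-frame measurements is a standard way to push effective precision below the single-frame noise level, which matches the abstract's claim of sub-pixel accuracy from multiple measurements per segment.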
As a highly topical research field, "Visual Computing" (VC) brings together several areas of computer science that share a common concern: the generation and analysis of visual signals. In the Department of Computer Science at FH Bonn-Rhein-Sieg, this aspect plays a central role in teaching and research within the Media Informatics specialization. Three essential areas of VC are taught in particular across various courses and projects: computer graphics, image processing, and hypermedia applications. The activities in these three areas converge in the context of immersive virtual visualization environments.
Neutral buoyancy has been used as an analog for microgravity since the earliest days of human spaceflight. Compared to other options on Earth, neutral buoyancy is relatively inexpensive and presents little danger to astronauts while simulating some aspects of microgravity. Neutral buoyancy removes somatosensory cues to the direction of gravity but leaves vestibular cues intact. Removal of both somatosensory and direction-of-gravity cues, whether by floating in microgravity or by using virtual reality to establish conflicts between them, has been shown to affect the perception of distance traveled in response to visual motion (vection) and the perception of distance. Does removal of somatosensory cues alone by neutral buoyancy similarly impact these perceptions? During neutral buoyancy we found no significant difference in either perceived distance traveled or perceived size relative to Earth-normal conditions. This contrasts with the differences in linear vection reported between short- and long-duration microgravity and Earth-normal conditions. These results indicate that neutral buoyancy is not an effective analog for microgravity for these perceptual effects.
The perception of the perceptual upright (PU) varies between contexts and across individuals, depending on how various gravity-related and body-based cues are weighted. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity level of about 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence is observable, and to confirm the hypothesis stated above. It could be shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing in normal gravity in evoking the sense of the perceptual upright. The data obtained further indicate that a gravity field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
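The threshold figures quoted above can be verified with a line of arithmetic; this sketch only restates the abstract's numbers in consistent units. The constants used are standard reference values, not taken from the study itself.

```python
# Worked check of the threshold arithmetic from the abstract above:
# the reported 0.15 g orientation threshold and lunar gravity, expressed
# in m/s^2 and compared against the ~0.1 m/s^2 vestibular threshold for
# linear acceleration. Constants are standard values, not study data.

G_EARTH = 9.81                 # standard gravity, m/s^2
threshold_g = 0.15             # reported orientation threshold, in g
moon_g = 1.62                  # lunar surface gravity, m/s^2
vestibular_threshold = 0.1     # linear-acceleration detection threshold, m/s^2

threshold_ms2 = threshold_g * G_EARTH   # 0.15 g expressed in m/s^2 (~1.47)
moon_in_g = moon_g / G_EARTH            # lunar gravity in g (~0.17)

# Both the orientation threshold and lunar gravity lie well above the
# vestibular detection threshold, consistent with the abstract's claim.
above = threshold_ms2 > vestibular_threshold and moon_g > vestibular_threshold
```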
Maintaining orientation in an environment with non-Earth gravity (≠ 1 g) is critical for an astronaut's operational performance. Such environments present a number of complexities for balance and motion. For example, when an astronaut tilts while ascending or descending an inclined plane on the Moon, the gravity vector will be tilted correctly, but its magnitude will differ from that on Earth. If this results in a mis-perceived tilt, it may lead to postural and perceptual errors, such as mis-perceiving the orientation of oneself or the ground plane, and corresponding errors in task judgment.
The perceived distance of self motion induced in a stationary observer by optic flow is overestimated (Redlick et al., Vis Res. 2001 41: 213). Here we assessed how different components of translational optic flow contribute to perceived distance traveled. Subjects sat on a stationary bicycle in front of a virtual reality display that extended beyond 90° on each side. They monocularly viewed a target presented in a virtual hallway wallpapered with stripes that changed colour to prevent tracking individual stripes. Subjects then looked centrally or 30°, 60° or 90° eccentrically while their view was restricted to an ellipse with faded edges (25° × 42°) centered on their fixation. Subjects judged when they had reached the target's remembered position. Perceptual gain (perceived/actual distance traveled) was highest when subjects were looking in a direction that depended on the simulated speed of motion. Results were modeled as the sum of separate mechanisms sensitive to radial and laminar optic flow. In our display distances were perceived as compressed. However, there was no correlation between perceptual compression and perceived speed of motion. These results suggest that visually induced self motion in virtual displays can be subject to large but predictable error.
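The proposed decomposition into radial and laminar flow mechanisms can be illustrated as a weighted sum whose weights depend on gaze eccentricity: looking centrally the observer sees mostly radial (expanding) flow, looking sideways mostly laminar flow. The blending function and the gain values below are illustrative assumptions for the sketch, not the authors' fitted model.

```python
# Illustrative sketch of a two-mechanism model of perceived distance:
# perceptual gain is a blend of a radial-flow and a laminar-flow response,
# weighted by gaze eccentricity. Weights and gains are assumptions.

import math

def perceptual_gain(perceived_m, actual_m):
    """Perceptual gain = perceived / actual distance traveled."""
    return perceived_m / actual_m

def modeled_gain(eccentricity_deg, radial_gain=1.2, laminar_gain=0.6):
    """Blend the two mechanisms by gaze eccentricity: 0 deg is pure
    radial flow, 90 deg is pure laminar flow (illustrative weighting)."""
    w_laminar = math.sin(math.radians(eccentricity_deg)) ** 2
    w_radial = 1.0 - w_laminar
    return w_radial * radial_gain + w_laminar * laminar_gain

central = modeled_gain(0)    # pure radial mechanism
side = modeled_gain(90)      # pure laminar mechanism
```

With such a blend, the eccentricity at which gain peaks shifts with the relative strength of the two mechanisms, which is how a speed-dependent "best" gaze direction (as reported above) could arise.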
Perception is one of the most important cognitive capabilities of an entity, since it determines how the entity perceives its environment. The presented work focuses on providing cost-efficient but realistic perceptual processes for intelligent virtual agents (IVAs) or NPCs, with the goal of providing a sound information basis for the entities' decision-making processes. In addition, an agent-centered perception process should provide a common interface for developers to retrieve data from the IVAs' environment. The overall process is evaluated by applying it to a scenario demonstrating its benefits. The evaluation indicates that such a realistically simulated perception process provides a powerful instrument to enhance the (perceived) realism of an IVA's simulated behavior.
The application of Raman and infrared (IR) microspectroscopy yields hyperspectral data containing complementary information about the molecular composition of a sample. The classification of hyperspectral data from the individual spectroscopic approaches is already state of the art in several fields of research. However, more complex structured samples and difficult measuring conditions can negatively affect classification accuracy and make a successful classification of the sample components challenging. This contribution presents a comprehensive comparison of supervised pixel classification of hyperspectral microscopic images, showing that a combined approach of Raman and IR microspectroscopy has a high potential to improve classification rates through a meaningful extension of the feature space. It shows that the complementary information in spatially co-registered hyperspectral images of polymer samples can be accessed using different feature extraction methods and, once fused at the feature level, is in general classified more accurately in a pattern recognition task than the data derived from either spectroscopic approach alone.
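Feature-level fusion as described here amounts to concatenating the per-pixel feature vectors of the two modalities before classification. The toy nearest-centroid classifier and the example data below are assumptions chosen to keep the sketch self-contained; the paper itself compares different feature extraction and classification methods.

```python
# Illustrative sketch of feature-level fusion: per pixel of spatially
# co-registered hyperspectral images, the Raman and IR feature vectors
# are concatenated into one extended vector before supervised
# classification. Classifier and data are toy assumptions.

def fuse(raman_features, ir_features):
    """Concatenate per-pixel feature vectors from both modalities."""
    return raman_features + ir_features

def nearest_centroid(sample, centroids):
    """Assign a fused feature vector to the class of the closest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Toy example: two polymer classes with fused (Raman + IR) centroids.
centroids = {
    "polymer_A": fuse([1.0, 0.2], [0.8, 0.1]),
    "polymer_B": fuse([0.1, 0.9], [0.2, 0.7]),
}
pixel = fuse([0.9, 0.3], [0.7, 0.2])   # fused features of one pixel
label = nearest_centroid(pixel, centroids)
```

The point of the extended feature space is that a pixel ambiguous in one modality can still be separated by the other modality's features, which is what drives the improved classification rates reported above.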
This contribution presents an easy to implement 3D tracking approach that works with a single standard webcam. We describe the algorithm and show that it is well suited for being used as an intuitive interaction method in 3D video games. The algorithm can detect and distinguish multiple objects in real-time and obtain their orientation and position relative to the camera. The trackable objects are equipped with planar patterns of five visual markers. By tracking (stereo) glasses worn by the user and adjusting the in-game camera's viewing frustum accordingly, the well-known immersive "screen as a window" effect can be achieved, even without the use of any special tracking equipment.
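The "screen as a window" effect mentioned above is commonly realized with an off-axis (asymmetric) perspective frustum recomputed each frame from the tracked eye position. The sketch below shows the general construction for an eye position in screen-centered coordinates; it illustrates the standard technique, not this paper's implementation.

```python
# General off-axis frustum construction for head-coupled "screen as a
# window" rendering: project the physical screen edges onto the near
# plane as seen from the tracked eye position. Standard technique,
# not taken from the paper.

def offaxis_frustum(eye, screen_w, screen_h, near):
    """Frustum bounds (left, right, bottom, top) at the near plane for an
    eye at (x, y, z) relative to the screen center, z > 0 in front of it."""
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: screen edge -> near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Centered eye 0.5 m in front of a 0.4 m x 0.3 m screen, near plane 0.1 m:
l, r, b, t = offaxis_frustum((0.0, 0.0, 0.5), 0.4, 0.3, 0.1)
```

With the eye centered the frustum is symmetric; as the tracked glasses move, the bounds become asymmetric and the screen behaves like a window into the scene, which is the effect the abstract describes.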