Fachbereich Informatik
Departments, institutes and facilities
- Fachbereich Informatik (1148)
- Institute of Visual Computing (IVC) (277)
- Institut für Cyber Security & Privacy (ICSP) (132)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (88)
- Institut für funktionale Gen-Analytik (IFGA) (67)
- Fachbereich Ingenieurwissenschaften und Kommunikation (46)
- Institut für Sicherheitsforschung (ISF) (38)
- Graduierteninstitut (21)
- Fachbereich Angewandte Naturwissenschaften (11)
- Fachbereich Wirtschaftswissenschaften (9)
Document Type
- Conference Object (606)
- Article (265)
- Report (77)
- Part of a Book (50)
- Preprint (50)
- Book (monograph, edited volume) (32)
- Doctoral Thesis (22)
- Conference Proceedings (18)
- Research Data (11)
- Master's Thesis (7)
Keywords
- Virtual Reality (13)
- Robotics (12)
- Machine Learning (10)
- Usable Security (10)
- virtual reality (10)
- 3D user interface (7)
- Quality diversity (7)
- Augmented Reality (6)
- Lehrbuch (6)
- Navigation (6)
The BRICS component model: a model-based development paradigm for complex robotics software systems
(2013)
Updating a shared data structure in a parallel program is usually done with some sort of high-level synchronization operation to ensure correctness and consistency. However, the underlying synchronization instructions in a processor architecture are costly and rather limited in their scalability on larger multi-core/multi-processor systems. In this paper, we examine work queue operations in which such costly atomic update operations are replaced with non-atomic modifiers (simple read+write). In this approach, we trade performing the exact amount of work with atomic operations against doing more, redundant work without atomic operations, and without violating the correctness of the algorithm. We show results for the application of this idea to the concrete scenario of parallel Breadth First Search (BFS) algorithms for undirected graphs on two large NUMA shared memory systems with up to 64 cores.
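The trade-off the abstract describes can be sketched compactly. The Python sketch below (sequential, with illustrative names; the paper's actual implementation is multi-threaded on NUMA hardware) shows a level-synchronous BFS in which frontier insertion is a plain read+write, so a vertex may be appended redundantly by several parents; the duplicates are filtered when levels are assigned, and the final labelling is still correct.

```python
def bfs_nonatomic(adj, source):
    """Level-synchronous BFS whose frontier updates need no atomic
    test-and-set: several parents may append the same vertex to the
    next frontier (a benign race in the parallel setting), and the
    redundant entries are filtered when the level is assigned."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:            # plain read, no lock
                    next_frontier.append(v)  # possibly redundant
        level += 1
        frontier = []
        for v in next_frontier:
            if v not in dist:                # duplicates die here
                dist[v] = level
                frontier.append(v)
    return dist
```

The extra work is the redundant frontier entries; correctness is preserved because a vertex's level is only ever written once, with a value that every racing writer would agree on.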
Information reliability and automatic computation are two important aspects that are continuously pushing the Web to be more semantic. Information uploaded to the Web should be reusable and automatically extractable to other applications, platforms, etc. Several tools exist to explicitly mark up Web content. Web services may also play a positive role in the automatic processing of Web content, especially when they act as flexible and agile agents. However, Web services themselves should be developed with semantics in mind. They should include and provide structured information to facilitate their use, reuse, composition, querying, etc. In this chapter, the authors focus on evaluating state-of-the-art semantic aspects and approaches in Web services. Ultimately, this contributes to the goal of Web knowledge management, execution, and transfer.
For centuries, if not millennia, people have dreamed of machines that would free them from unpleasant, dull, dirty, and dangerous tasks and work for them as servants. Service robots seem to finally let these dreams come true. But where are all these robots that serve us all day long, day after day? A few service robots have entered the market: domestic and professional cleaning robots, lawn mowers, milking robots, and entertainment robots. Some of these robots look more like toys or gadgets than real robots. But where are the rest? This question is asked not only by customers, but also by service providers, care organizations, politicians, and funding agencies. The answer is not very satisfying. Today's service robots have problems operating in everyday environments, which is far more challenging than operating an industrial robot behind a fence. There is a comprehensive list of technical and scientific problems that still need to be solved. Advancing the state of the art in service robotics towards robots capable of operating in everyday environments was the major objective of the DESIRE project (Deutsche Service Robotik Initiative – German Service Robotics Initiative), funded by the German Ministry of Education and Research (BMBF) under grant no. 01IME01A. This book offers a sample of the results achieved in DESIRE.
Open Source ERP-Systeme
(2012)
Free and open source software can reduce IT costs considerably. Because of their high degree of penetration in enterprises and the associated cost block, this applies in particular to free and open source (FOS) ERP systems. Although the adoption and acceptance of FOS ERP systems have grown strongly in recent years, further potential could be unlocked through improved market transparency. Existing market overviews of FOS ERP systems, however, are not very comprehensive. Against this background, a market guide with detailed information on the various FOS ERP systems was compiled.
ERP systems are used throughout the whole enterprise and are therefore responsible for a high percentage of IT expenses. The use of free and open source ERP systems (FOS ERP systems) can help to reduce these IT costs. Though the acceptance of FOS ERP systems has increased enormously in recent years, even more enterprises would use FOS ERP systems to support their order processing if the FOS ERP market were more transparent. Existing market surveys are not very comprehensive. Therefore, a detailed market guide was developed.
We present our approach to extending a Virtual Reality software framework towards use for Augmented Reality applications. Although VR and AR applications have very similar requirements in terms of abstract components (like 6DOF input, stereoscopic output, and simulation engines), the requirements in terms of hardware and software vary considerably. In this article we share the experience gained from adapting our VR software framework for AR applications and address the design issues involved. The result is a basic VR/AR software layer that allows us to implement interactive applications without fixing their type (VR or AR) beforehand: switching from VR to AR is a matter of changing the application's configuration file. We also give an example of the use of the extended framework: augmenting the magnetic field of bar magnets in physics classes. We describe the setup of the system and the real-time calculation of the magnetic field on a GPU.
Using virtual environment systems for road safety education requires a realistic simulation of road traffic. Current traffic simulations are either too restricted in the complexity of their agent behavior or focus on aspects not important in virtual environments. More importantly, none of them are concerned with modeling misbehavior of traffic participants, which is part of everyday traffic and should therefore not be neglected in this context. We present a concept for a traffic simulation that addresses the need for more realistic agent behavior with regard to road safety education. The two major components of this concept are a simulation of persistent agents that minimizes computational overhead, and a model of the cognitive processes of human drivers combined with psychological personality profiles to allow for individual behavior and misbehavior.
This paper compares the memory allocation of two Java virtual machines, namely the Oracle Java HotSpot VM 32-bit (OJVM) and the JamaicaVM (JJVM). The basic architectural difference between the two machines is that the JJVM allocates objects on the heap in fixed-size blocks. This means that objects bigger than the specified block size have to be split into several connected blocks, while for small objects a full block must still be allocated. The paper contains both a theoretical and an experimental analysis of the memory overhead. The theoretical analysis is based on the specifications of the two virtual machines; the experimental analysis is done with a modified JVMTI agent together with the SPECjvm2008 benchmark.
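The overhead argument can be illustrated with a small calculation. The block size and link-pointer cost below are illustrative stand-ins, not JamaicaVM's actual parameters:

```python
import math

def fixed_block_overhead(obj_size, block_size=32, link_bytes=4):
    """Overhead of allocating one object on a fixed-size-block heap.
    Each block spends `link_bytes` on the pointer chaining it to the
    next block, so large objects are split across several blocks and
    small objects still occupy a whole block. Parameter values are
    illustrative, not JamaicaVM's actual ones."""
    payload = block_size - link_bytes
    blocks = max(1, math.ceil(obj_size / payload))
    allocated = blocks * block_size
    return allocated, allocated - obj_size
```

With these assumed values, a 100-byte object needs ceil(100/28) = 4 blocks, i.e. 128 allocated bytes and 28 bytes of overhead, while a 5-byte object still consumes a full 32-byte block.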
This paper describes adaptive time-frequency analysis of EEG signals, both in theory and in practice. A momentary-frequency estimation algorithm is discussed and applied to EEG time series of subjects performing a concentration experiment. The motivation for deriving and implementing a time-frequency estimator is the assumption that an emotional change implies a transient in the measured EEG time series, which is in turn superimposed with biological white noise as well as artifacts. It is shown how accurately and robustly the estimator detects the transient even under such complicated conditions.
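The idea of a momentary-frequency estimate can be illustrated with a deliberately crude stand-in; the paper's actual estimator is not reproduced here. Counting sign changes over a short window already yields a rough frequency track:

```python
import math

def zero_crossing_frequency(x, fs):
    """Crude momentary-frequency estimate from sign changes: two
    zero crossings per oscillation cycle. A stand-in to illustrate
    the idea only, not the estimator derived in the paper."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if (a < 0) != (b < 0))
    return crossings / (2.0 * (len(x) / fs))

# One second of a clean 10 Hz oscillation sampled at 256 Hz
# (roughly the EEG alpha band and a typical EEG sampling rate):
fs = 256.0
x = [math.sin(2 * math.pi * 10.0 * n / fs) for n in range(256)]
estimate = zero_crossing_frequency(x, fs)
```

Applied to successive short windows, such an estimator produces a frequency-over-time trace in which a transient shows up as a sudden jump.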
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, e.g. interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered into a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
The relative contributions of radial and laminar optic flow to the perception of linear self-motion
(2012)
When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrunk to an oval region 40° (h) × 24° (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow. Results are compatible with independent channels for processing the radial and laminar flow components of optic flow that add linearly to produce large but predictable errors in perceived distance traveled.
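A leaky spatial integrator of the kind fitted here can be sketched as a simple discrete simulation. The form below is one common reading of such models, with illustrative parameter names; it is not the exact formulation of Lappe, Jenkin, & Harris (2007):

```python
def leaky_integrator_distance(speed, duration, gain, leak, dt=0.01):
    """Discrete simulation of a leaky integrator of optic-flow
    speed: perceived distance D grows with `gain` times the flow
    speed and decays with the `leak` constant. One common reading
    of such models; parameter names are illustrative, not the
    exact formulation fitted in the paper."""
    D, t = 0.0, 0.0
    while t < duration:
        D += (gain * speed - leak * D) * dt
        t += dt
    return D
```

In the study's terms, viewing eccentricity would modulate `gain` while `leak` stays fixed; a nonzero leak makes the integrated distance fall short of the veridical speed × duration product.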
Interactive Distributed Rendering of 3D Scenes on Multiple Xbox 360 Systems and Personal Computers
(2012)
For the case when the abstraction of instantaneous state transitions is adopted, this paper proposes to start fault detection and isolation in an engineering system from a single time-invariant causality bond graph representation of a hybrid model. To that end, the paper picks up on a long-known proposal to model switching devices by a transformer modulated by a Boolean variable and a resistor in fixed conductance causality accounting for its ON resistance. Bond graph representations of hybrid system models developed in this way have been used so far mainly for the purpose of simulation. The paper shows that they can equally well constitute an approach to bond-graph-based quantitative fault detection and isolation of hybrid models. Advantages are that the standard sequential causality assignment procedure can be used without modification, and that a single set of analytical redundancy relations valid for all physically feasible system modes can be (automatically) derived from the bond graph. Stiff model equations due to small values of the ON resistance in the switch model may be avoided by symbolic reformulation of the equations and letting the ON resistance of some switches tend to zero, turning them into ideal switches.
First, for two examples considered in the literature, it is shown that the approach proposed in this paper can produce the same analytical redundancy relations as were obtained from a hybrid bond graph with controlled junctions and the use of a sequential causality assignment procedure especially for fault detection and isolation purpose. Moreover, the usefulness of the proposed approach is illustrated in two case studies by its application to standard switching circuits extensively used in power electronic systems and by simulation of some fault scenarios. The approach, however, is not confined to the fault detection and isolation of such systems. Analytically validated simulation results obtained by means of the program Scilab give confidence in the approach.
A bond graph representation of switching devices known for a long time has been a modulated transformer with a modulus b(t)∈{0,1} ∀t≥0 in conjunction with a resistor R:Ron accounting for the ON-resistance of a switch considered non-ideal. Besides other representations, this simple model has been used in bond graphs for simulation of the dynamic behaviour of hybrid systems. A previous article of the author has proposed to use the transformer–resistor pair in bond graphs for fault diagnosis in hybrid systems. Advantages are a unique bond graph for all system modes, the application of the unmodified standard Sequential Causality Assignment Procedure, fixed computational causalities, and the derivation of analytical redundancy relations incorporating 'Boolean' transformer moduli so that they hold for all system modes. Switches temporarily connect and disconnect model parts. As a result, some independent storage elements may temporarily become dependent, so that the number of state variables is not time-invariant. This article addresses this problem in the context of modelling and simulation of fault scenarios in hybrid systems. In order to keep time-invariant preferred integral causality at storage ports, residual sinks previously introduced by the author are used. When two storage elements become dependent at a switching time instant ts, a residual sink is activated. It enforces that the outputs of the two dependent storage elements become immediately equal by imposing the conjugate power variable of appropriate value on their inputs. The approach is illustrated by the bond graph modelling and simulation of some fault scenarios in a standard three-phase switched power inverter supplying power into an RL-load in a delta configuration. A well-developed approach to model-based fault detection and isolation is to evaluate the residuals of analytical redundancy relations.
In this article, analytical redundancy relation residuals have been computed numerically by coupling a bond graph of the faulty system to one of the non-faulty systems by means of residual sinks. The presented approach is not confined to power electronic systems but can be used for hybrid systems in other domains as well. In further work, the RL-load may be replaced by a bond graph model of an alternating current motor in order to study the effect of switch failures in the power inverter on to the dynamic behaviour of the motor.
A service level agreement (SLA) bindingly specifies all aspects of a contractual collaboration between a company and a service provider. An SLA must be drafted carefully in order to establish a relationship of trust between both parties. This involves content-related, organizational, and technical requirements, as well as a precise definition of the technical terms and performance criteria used. This article describes the contents of an SLA point by point: among other things, the naming of the contracting parties, the performance criteria that ensure the quality of the service, the monitoring of the delivery of the agreed service, and the duration of the contract.
We present the extensible post processing framework GrIP, usable for experimenting with screen space-based graphics algorithms in arbitrary applications. The user can easily implement new ideas as well as add known operators as components to existing ones. Through a well-defined interface, operators are realized as plugins that are loaded at run-time. Operators can be combined by defining a post processing graph (PPG) using a specific XML-format where nodes are the operators and edges define their dependencies. User-modifiable parameters can be manipulated through an automatically generated GUI. In this paper we describe our approach, show some example effects and give performance numbers for some of them.
We present a graph-based framework for post-processing filters, called GrIP, providing the possibility of arranging and connecting compatible filters in a directed acyclic graph for real-time image manipulation. This means that whole filter graphs can be constructed through an external interface, avoiding the need for a recompilation cycle after changes to the post processing. Filter graphs are implemented as XML files containing a collection of filter nodes with their parameters as well as linkage (dependency) information. Implemented methods include (but are not restricted to) depth of field, depth darkening, and an implementation of screen space shadows, all applicable in real time with manipulable parameterizations.
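The dependency resolution that such a filter graph requires is a topological sort of the nodes. A minimal sketch using Kahn's algorithm (filter names are illustrative, and GrIP's actual XML schema is not reproduced here):

```python
from collections import deque

def filter_schedule(filters, deps):
    """Kahn's algorithm: produce an execution order for a
    post-processing graph in which each filter runs only after the
    filters it depends on. `deps` maps a filter to the filters
    whose output it consumes; names are illustrative."""
    indegree = {f: len(deps.get(f, ())) for f in filters}
    users = {f: [] for f in filters}
    for f, sources in deps.items():
        for s in sources:
            users[s].append(f)
    ready = deque(f for f in filters if indegree[f] == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for g in users[f]:
            indegree[g] -= 1
            if indegree[g] == 0:
                ready.append(g)
    if len(order) != len(filters):
        raise ValueError("cycle in filter graph")
    return order
```

The cycle check matters for a user-editable graph format: a malformed XML file that wires two filters into a loop is rejected instead of deadlocking the render pipeline.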
This work presents a method for generating and rendering natural-looking vegetation on very large areas while taking ecological factors into account. Because of the complexity of biological systems and the richness of detail in plant models, the generation and visualization of vegetation is a challenging area of computer graphics, and it can considerably increase the realism of landscape visualizations. Building on [DMS06], Silva generates vegetation in such a way that the Wang tiles required for rendering and the partial distributions associated with them can be reused. To this end, a method is presented for generating Poisson disk distributions with variable radii on seamless Wang tile sets without computationally expensive global optimization. By incorporating neighborhoods and freely configurable generation pipelines, arbitrary abiotic and biotic factors can be taken into account during vegetation generation. The plant distributions that Silva produces on Wang tiles allow the accelerating data structures built on top of them to be reused during visualization. Multi-level instancing and nested kd-trees make it possible to visualize large vegetated areas of hundreds of square kilometers with low render times and a small memory footprint.
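The naive baseline that such tile-based methods improve on is brute-force dart throwing with per-point radii. A minimal sketch (illustrative parameters; O(n²) per candidate and without Silva's Wang-tile construction):

```python
import random

def poisson_disk(width, height, radius_fn, attempts=2000, seed=1):
    """Brute-force dart throwing with variable radii: a candidate
    point is accepted if it keeps at least max(r_i, r_j) distance
    to every already accepted point. `radius_fn` plays the role of
    an ecological density map. This is the naive global version
    that tile-based methods like Silva's avoid recomputing."""
    rng = random.Random(seed)
    points = []
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = radius_fn(x, y)
        if all((x - px) ** 2 + (y - py) ** 2 >= max(r, pr) ** 2
               for px, py, pr in points):
            points.append((x, y, r))
    return points
```

Precomputing such distributions per Wang tile, with matching edges, is what lets a runtime system cover arbitrarily large terrain without repeating this global rejection test.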
This contribution presents Volt, an interactive volume renderer for the NVIDIA CUDA architecture. Acceleration is achieved by exploiting the technical characteristics of the CUDA device, by partitioning the algorithm, and by executing the CUDA kernels asynchronously. Parallelism is exploited on the host, on the device, and between host and device. We show how the computations are carried out efficiently through the targeted use of resources. The results are copied back, so the kernel does not have to run on the device used for display. Synchronization of the CUDA threads is not necessary.
This contribution describes an optical laser-based user interaction system designed for virtual reality (VR) environments. The project's objective is to realize a 6-DoF user input device for interaction with VR applications running in CAVE-type visualization environments with flat projection walls. In the case of a back-projection VR system, in contrast to optical tracking systems, no camera has to be placed within the visualization environment. Instead, cameras observe patterns of laser beam projections from behind the screens. These patterns are emitted by a hand-held input device. The system is robust with respect to partial occlusion of the laser pattern. An inertial measurement unit is integrated into the device to improve robustness and precision.
Nowadays, Field Programmable Gate Arrays (FPGAs) are used in many fields of research, e.g. to create hardware prototypes or in applications where hardware functionality has to be changed frequently. The Boolean circuits that FPGAs implement are the compiled result of hardware description languages such as Verilog or VHDL. Odin II is a tool that supports developers in the research of FPGA-based applications and FPGA architecture exploration by providing a framework for compilation and verification. In combination with the tools ABC, T-VPACK, and VPR, Odin II is part of a CAD flow that compiles Verilog source code targeting specific hardware resources. This paper describes the development of a graphical user interface as part of Odin II. The goal is to visualize the results of these tools in order to explore the changing structure during the compilation and optimization processes, which can be helpful for researching new FPGA architectures and improving the workflow.
Having multiple talkers on a bus system raises the load on that bus. To monitor the communication on a bus, tools that constantly read the bus are needed. This report presents an implementation of a monitoring system for the CAN bus based on the Altera DE2 development board. The Biomedical Institute of the University of New Brunswick is currently developing, together with different partners, a prosthetic limb device, the UNB hand. Communication in this device runs over two CAN buses operating at a bit rate of 1 Mbit/s. The monitoring system has been designed entirely in Verilog HDL. It monitors the CAN bus in real time and allows monitoring of individual modules as well as of the overall load. The calculated data is displayed on the built-in LCD and also transmitted via UART to a PC; a sample receiver programmed in C is also provided. The system has been evaluated using the Microchip CAN Bus Analyzer tool, connected to the GPIO port of the development board, to simulate CAN communication.
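The bus-load figure such a monitor reports can be approximated with a back-of-the-envelope formula. The sketch below assumes standard data frames with 11-bit identifiers (about 44 overhead bits per frame) and ignores stuff bits, which can add roughly 20% more in the worst case:

```python
def can_bus_load(frames_per_second, payload_bytes=8, bitrate=1_000_000):
    """Back-of-the-envelope CAN bus load: fraction of the available
    bit rate consumed. Assumes standard 11-bit-identifier data
    frames (about 44 overhead bits each) and ignores stuff bits and
    the interframe space, so real loads run somewhat higher."""
    bits_per_frame = 44 + 8 * payload_bytes
    return frames_per_second * bits_per_frame / bitrate
```

At the 1 Mbit/s rate used in the UNB hand, 1000 eight-byte frames per second would occupy about 10.8% of the bus by this estimate.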
Fuzzelarbeit: Identifizierung unbekannter Sicherheitslücken und Software-Fehler durch Fuzzing
(2011)
Fuzzing, the tool-supported identification of security vulnerabilities, is typically employed in the final stages of software development. It is suitable for finding vulnerabilities in any kind of software. During fuzzing, the robustness of the target software is tested with targeted, unexpected input data. The article describes the fuzzing process as well as a taxonomy of fuzzers, which are divided into "dumb" and "intelligent" fuzzers. Vulnerabilities and defects in the target software are identified through comprehensive monitoring (debuggers, profilers, trackers). The usually large number of identified weaknesses and vulnerabilities makes it necessary to assess each of them, because for economic reasons not all of them can normally be fixed. Important assessment parameters include discoverability by third parties, reproducibility, exploitability, required access rights, and the damage that can be caused. Around 300 tools are available on the Internet. The quality of a fuzzer, however, cannot be stated in general terms: its effectiveness and suitability depend on the target software and the individual requirements of the tester.
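A "dumb" fuzzer in the article's sense mutates valid inputs blindly and watches the target for failures. A minimal sketch (the target callable and all names are illustrative, not a specific fuzzing tool):

```python
import random

def mutate(data, rng, flips=4):
    """Flip a few random bits of a valid seed input."""
    buf = bytearray(data)
    for _ in range(flips):
        pos = rng.randrange(len(buf))
        buf[pos] ^= 1 << rng.randrange(8)
    return bytes(buf)

def dumb_fuzz(target, seed_input, iterations=200):
    """Minimal 'dumb' fuzzer: mutate a valid seed blindly and
    record every input on which the target raises an unexpected
    exception. `target` stands for any callable under test; real
    fuzzers instead monitor a process with debuggers/profilers."""
    rng = random.Random(0)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input, rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes
```

The recorded crashing inputs correspond to the findings that then have to be triaged by the assessment parameters the article lists (reproducibility, exploitability, and so on).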
Novel Automated Three-Dimensional Genome Scanning Based on the Nuclear Architecture of Telomeres
(2011)
Despite perfect functioning of its internal components, a robot can be unsuccessful in performing its tasks because of unforeseen situations. These situations occur when the behavior of the objects in the robot's environment deviates from its expected values. For robots, such deviations are exhibited in the form of unknown external faults which prohibit them from performing their tasks successfully. In this work we propose to use naive physics knowledge to reason about such faults in the robotics domain. We propose an approach that uses naive physics concepts to find information about the situations which result in a detected unknown fault. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties for reasoning about the detected fault. By interpreting the reasoning results, the robot finds information about the situations which can cause the fault. We apply the proposed approach to scenarios in which a robot performs manipulation tasks of picking and placing objects. The results of this application show that naive physics holds great promise for reasoning about unknown external faults in robotics.
Since their founding in the early 1970s, the Fachhochschulen have changed considerably as universities of applied sciences. The subject portfolio of many universities of applied sciences is now comparable to that of traditional universities. In some subjects, universities of applied sciences even educate the majority of graduates. Application-oriented cutting-edge research is part of the self-image of many universities of applied sciences. Against this background, it is incomprehensible, and harmful to future economic viability, that universities of applied sciences still face clear competitive disadvantages in the further qualification of young researchers. This is all the more true when private universities comparable to universities of applied sciences are granted the right to award doctorates.
As a heuristic method, threat modeling enables the systematic examination of a system design or software architecture in order to identify, contain, and fix security vulnerabilities cost-effectively and early in the software development process, ideally in the design phase. Threat modeling can, however, also be applied successfully in the verification phase, or even later, after release, for auditing the software. Early detection of vulnerabilities can reduce the cost of fixing them by up to a factor of one hundred. The threat modeling tools available on the market are identified, analyzed, and, with regard to their suitability for creating complex, complete threat models, subjected to a simple evaluation procedure based on a set of developed assessment parameters.
In Mixed Reality (MR) environments, the user's view is augmented with virtual, artificial objects. To visualize virtual objects, the position and orientation of the user's view or the camera is needed. Tracking the user's viewpoint is an essential task in MR applications, especially for interaction and navigation. In present systems, the initialization is often complex. For this reason, we introduce a new method for fast initialization of markerless object tracking. This method is based on Speeded-Up Robust Features (SURF) and, paradoxically, on a traditional marker-based library. Most markerless tracking algorithms can be divided into two parts: an offline and an online stage. The focus of this paper is the optimization of the offline stage, which is often time-consuming.
Reversible logic synthesis is an emerging research topic with different application areas such as low-power CMOS design and quantum and optical computing. The key motivation behind reversible logic synthesis is to address the heat dissipation problem that current architectures exhibit, by reducing dissipation to theoretically zero [2].
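Reversibility here means that a gate's truth table is a permutation of the input states, so no information (and, in principle, no heat) is lost. A minimal sketch using the Toffoli (CCNOT) gate, the standard universal reversible gate:

```python
def toffoli(a, b, c):
    """CCNOT gate: flips the target bit c iff both control bits
    are set. The gate is its own inverse."""
    return a, b, c ^ (a & b)

def is_reversible(gate, bits=3):
    """A gate on `bits` wires is reversible iff it maps the 2^bits
    input states onto 2^bits distinct output states (a bijection)."""
    outputs = {gate(*[(i >> k) & 1 for k in range(bits)])
               for i in range(2 ** bits)}
    return len(outputs) == 2 ** bits
```

An ordinary AND gate fails this test (two inputs per output bit are lost), which is exactly why irreversible logic carries a thermodynamic lower bound on dissipation per erased bit.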
This contribution presents an easy-to-implement 3D tracking approach that works with a single standard webcam. We describe the algorithm and show that it is well suited for use as an intuitive interaction method in 3D video games. The algorithm can detect and distinguish multiple objects in real time and obtain their orientation and position relative to the camera. The trackable objects are equipped with planar patterns of five visual markers. By tracking (stereo) glasses worn by the user and adjusting the in-game camera's viewing frustum accordingly, the well-known immersive "screen as a window" effect can be achieved, even without the use of any special tracking equipment.
The perceived distance of self-motion induced in a stationary observer by optic flow is overestimated (Redlick et al., Vis Res. 2001 41: 213). Here we assessed how different components of translational optic flow contribute to perceived distance traveled. Subjects sat on a stationary bicycle in front of a virtual reality display that extended beyond 90° on each side. They monocularly viewed a target presented in a virtual hallway wallpapered with stripes that changed color to prevent tracking of individual stripes. Subjects then looked centrally or 30°, 60°, or 90° eccentrically while their view was restricted to an ellipse with faded edges (25° × 42°) centered on their fixation. Subjects judged when they had reached the target's remembered position. Perceptual gain (perceived/actual distance traveled) was highest when subjects were looking in a direction that depended on the simulated speed of motion. Results were modeled as the sum of separate mechanisms sensitive to radial and laminar optic flow. In our display, distances were perceived as compressed. However, there was no correlation between perceptual compression and perceived speed of motion. These results suggest that visually induced self-motion in virtual displays can be subject to large but predictable errors.
Zentrale Archivierung und verteilte Kommunikation digitaler Bilddaten in der Pneumokoniosevorsorge
(2010)
Pneumoconiosis screening examinations require the reading of a chest X-ray according to the ILO classification of pneumoconioses. Meanwhile, the required images are to a large extent produced and communicated digitally. This creates new requirements for the technology and workflow mechanisms used, in order to guarantee an efficient process of examination, reading, and documentation.
Simultaneous detection of cyanide and heavy metals for environmental analysis by means of µISEs
(2010)
In this contribution, we describe the activities and promotion programs established at the Bonn-Rhein-Sieg University as an institution and at the Department of Computer Science in particular for increasing the total number of computer science students, and especially the proportion of female students. We report on our experiences in addressing gender aspects in education and try to evaluate the outcome of our programs with respect to our equal rights for women strategy. We propose a closer look at mental self-theories, enabled by e-portfolios, to also address gender issues in computer science. Moreover, reasons are identified and discussed which may be responsible for the reduced interest of young adults, female ones in particular, in choosing a computer science study program.
In this paper, we describe an approach to academic teaching in computer science using storytelling as a means to investigate hypermedia and virtual reality topics. There are indications that narrative activity within the context of a Hypermedia Novel related to educational content can enhance motivation for self-conducted learning and, in parallel, lead to an edutainment system in its own right. In contrast to existing approaches, the Hypermedia Novel environment allows an iterative approach to the narrative content, integrating story authoring and story reception not only in the beginning but at any time. The narrative practice and background research, as well as the resulting product, can supplement lecture material with success comparable to traditional academic teaching approaches. On top of this, there is the added value of soft-skill training and a gain of expert knowledge in areas of personal background research.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). The work builds on [5], where the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated experimentally. The results and conclusions of that previous work form the starting point for the work described in this report. The project results show considerable improvement over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended by a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video material and compute their center points.
This work belongs to the MI6 project of the Computer Vision research group of the University of Applied Sciences Bonn-Rhein-Sieg. The intent is the development of a passive tracking device for an immersive environment to improve user interaction and system usability. This requires detecting the user's position and orientation relative to the projection surface. For a reliable estimation, a robust and fast computation of the BLOBs' center points is necessary. The project has covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been used to provide image material for processing: first, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for validation against ground truth and further processing. The evaluation compares precision and performance gain depending on the applied computation methods and on the input device providing the image material.
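The report's FPGA pipeline is hardware-specific, but the underlying operation can be illustrated in software: label the connected foreground pixels of a binary image and take the mean pixel coordinate of each component as its center point. This is a generic sketch (function name, 4-connectivity, and list-of-rows input format are assumptions, not the report's implementation):

```python
from collections import deque

def blob_centroids(image):
    """Label 4-connected foreground pixels (value 1) in a binary image
    given as a list of rows, and return the center point (row, col)
    of each BLOB as the mean coordinate of its pixels."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one connected component.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Center point = mean pixel coordinate of the component.
                centroids.append((sum(p[0] for p in pixels) / len(pixels),
                                  sum(p[1] for p in pixels) / len(pixels)))
    return centroids
```

On an FPGA the same result is typically accumulated in a streaming fashion (running sums of coordinates per label) rather than by an explicit flood fill, which is what makes per-frame center-point computation feasible on continuous video.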
An electronic display often has to present information from several sources. This contribution reports on an approach in which programmable logic (an FPGA) synchronises and combines several graphics inputs. The application area is computer graphics, especially the rendering of large 3D models, which is a compute-intensive task. Complex scenes are therefore generated on parallel systems and merged to give the requested output image. So far, the transport of intermediate results has often been done over a local area network. However, as this can be a limiting factor, the new approach removes this bottleneck and combines the graphics signals with an FPGA.
This paper describes FPGA-based image combining for parallel graphics systems. The goal of our current work is to reduce network traffic and latency in order to increase the performance of parallel visualization systems. Initial data distribution is based on a common Ethernet network, whereas image combining and return differ from traditional parallel rendering methods: calculated sub-images are grabbed directly from the DVI ports for fast image compositing by an FPGA-based combiner.
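One common merge rule in such sort-last parallel rendering setups is a per-pixel depth compare: for each pixel, keep the fragment closest to the camera. Whether the combiner described here uses depth values or screen-space tiling is not stated, so the following is a generic software sketch of the depth-compare variant, with flat per-pixel lists standing in for the DVI pixel streams:

```python
def depth_composite(color_a, depth_a, color_b, depth_b):
    """Merge two rendered sub-images pixel by pixel: at each position,
    the fragment with the smaller depth value (closer to the camera)
    wins. Inputs are flat per-pixel lists of equal length."""
    color_out, depth_out = [], []
    for ca, da, cb, db in zip(color_a, depth_a, color_b, depth_b):
        if da <= db:
            color_out.append(ca)
            depth_out.append(da)
        else:
            color_out.append(cb)
            depth_out.append(db)
    return color_out, depth_out
```

The appeal of doing this in an FPGA is that the compare-and-select is purely local per pixel, so the combiner can process the incoming video streams at line rate without buffering whole frames or touching the network.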
This paper presents the methodical approach and early findings of the project SEN-TAF (Technology Acceptance by the Elderly to Increase Independence). The project aims to examine the acceptance of robotic systems by elderly people and to make early recommendations on necessary features such systems should provide. Based on theoretical approaches to technology acceptance and an empirical study examining the general need for support among the elderly, we developed several scenarios of robot applications. These scenarios are then visualized in animations and simulations to check the preliminarily defined acceptance model. Besides these scenarios, we survey several other factors which might have an impact on the overall acceptance, e.g. the appearance of the robotic systems (humanoid vs. technical appearance) and the interaction 'mode' (speaking vs. non-speaking). In addition to these animations and simulations, we survey the acceptance of the robotic dog AIBO as an early placeholder for future developments in animal robotic systems, which could serve as a resource against boredom.
The standards DIN EN 61508 and DIN EN 62304 describe safety requirements for the development of software in the medical domain. These include, among other things, provisions on verification and diagnosis (Chapter C5, DIN EN 61508-7), on the assessment of functional safety (Chapter C6, DIN EN 61508-7), on the implementation and verification of software units (Chapter 5.5, DIN EN 62304), and on testing the software system (Chapter 5.7, DIN EN 62304). The cost-effective techniques of threat modeling and fuzzing meet these requirements and, in particular, enable the identification of unpublished security vulnerabilities. In a research project, tools for both techniques are analyzed and evaluated. Using both techniques, the project has very successfully identified and also fixed previously unidentified (unpublished) security vulnerabilities in custom and off-the-shelf software. In the context of health telematics, both techniques can satisfy the requirements for software development and verification and, beyond that, achieve a far higher level of security.
Tool-supported identification of security vulnerabilities can be carried out at various stages of the software development process and life cycle. Fuzzing and threat modeling, for example, are two methods that can be used to find security problems even in applications already in production. Threat modeling is a heuristic technique that supports the methodical development of a trustworthy system design. The fuzzing method checks the robustness of the software under test with both arbitrary and targeted input data. This contribution outlines how the two methods are applied and gives pointers to corresponding tools.
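The core idea of mutation-based fuzzing mentioned above can be sketched compactly: take a valid seed input, randomly corrupt a few bytes, feed the result to the software under test, and record inputs that crash it. This is a minimal illustrative sketch, not one of the tools evaluated in the project; the function names and the exception-equals-crash convention are assumptions.

```python
import random

def mutate(seed_bytes, n_flips=4, rng=random):
    """Randomly overwrite a few bytes of a valid seed input -- the core
    mutation step of a simple mutation-based fuzzer."""
    data = bytearray(seed_bytes)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed, iterations=1000):
    """Feed mutated inputs to `target` and collect the inputs that make
    it raise an exception (here treated as a 'crash')."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes
```

Real fuzzing tools add coverage feedback, input minimization, and protocol-aware (targeted) mutations on top of this loop, which is what makes them effective against software already in production.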