Department of Computer Science
H-BRS Bibliography
Document Type
- Master's Thesis (37)
- Bachelor Thesis (19)
- Report (4)
- Diploma Thesis (1)
- Study Thesis (1)
Keywords
- Emergency support system (2)
- Mobile sensors (2)
- Robotik (2)
- chemoCR (2)
- 0-1-Integer-Problem (1)
- 3D-Laserscanner (1)
- 3D-Punktwolke (1)
- 3D-Scanner (1)
- ASAG (1)
- Active Learning (1)
- Alize (1)
- Augmented Reality (1)
- Automation (1)
- Batch Normalization (1)
- Bildverarbeitung (1)
- Bounding box explanations (1)
- Chatbot (1)
- Classification explanations (1)
- Computer Game (1)
- Computer Vision (1)
- Conversational Search (1)
- Correlation (1)
- Datenbank (1)
- Directed Acyclic Graph (1)
- Distributed Systems (1)
- Domänenspezifische Sprache (1)
- Echtzeit-Tracking (1)
- Electronic Data Capture (EDC) (1)
- Expertensystem (1)
- Explainable Artificial Intelligence (XAI) (1)
- Flussnetz (1)
- Gnu Linear Programming Kit (1)
- Gradient-based explanation methods (1)
- Graphentheorie (1)
- Hibernate (1)
- Human Muscle (1)
- ICP (1)
- Information Retrieval (1)
- Interactive visualization (1)
- JBoss Drools (1)
- Java (1)
- Klassische Suchverfahren (1)
- KnowledgeFinder (1)
- Kollaboration (1)
- Kombinatorische Optimierung (1)
- Konfiguration (1)
- LAMA (1)
- LDA (1)
- LP-Heuristik (1)
- Labordaten (1)
- Lagerlogistik (1)
- LibAMA (1)
- Lineare Programmierung (1)
- Löser (1)
- Markush (1)
- Maximalflussproblem (1)
- Minimaler Schnitt (1)
- Motivation System (1)
- Multi-object visualization (1)
- NP-Vollständigkeit (1)
- Nachbarschaftsanalyse (1)
- OGC sensor observation service (2)
- OSGi (1)
- Object detectors (1)
- Operations Research (1)
- Optimierungsproblem (1)
- PLDA (1)
- Path-Packing (1)
- Quantitative analysis of explanations (1)
- Query method (1)
- RANSAC (1)
- RCE (1)
- REDCap (1)
- Random forest (1)
- Robotics (1)
- Rubrics (1)
- Rucksackproblem (1)
- SELU (1)
- SLAM (1)
- Saliency maps (1)
- Sandbox (1)
- Sanity checks for explaining detectors (1)
- Segmentierung (1)
- Semantische Suche (1)
- Semantische Technologien (1)
- Sensor web enablement (1)
- Short answer grading (1)
- Software testing (1)
- Speaker identification (1)
- Split-Screen (1)
- UAV (1)
- Virtuelle Realität (1)
- Wissensrepräsentation (1)
- YOLO v3 (1)
- bearing angle (1)
- binary classification (1)
- context free grammar (1)
- deep learning (1)
- domain specific language (1)
- extSMILES (1)
- external faults (1)
- i-vectors (1)
- laser scanner (1)
- lineares Gleichungssystem (1)
- mobile manipulators (1)
- object detection (1)
- optical flow (1)
- patent search (1)
- quadrotor (1)
- structure reconstruction (1)
- Öffentliche Verwaltung (1)
Distributed computing environments allow collaborative problem solving across teams and organisations. A fundamental precondition for collaboration is the ability to find available participants and to exchange information. One way to approach this problem is through central directories or registry services. A major disadvantage of centralized components is that they limit the flexibility to form ad hoc networks targeted at solving a specific problem. To facilitate flexible and dynamic collaborations, ideas from decentralized and self-organising networks can be combined with concepts of service-oriented computing. This project investigates potential solutions for the dynamic discovery of network participants and outlines how to manage the challenges associated with developing a discovery protocol for distributed systems. In the course of the project, a prototypical implementation was created that integrates into RCE, an open-source environment for distributed, collaborative problem solving [9]. RCE is currently developed at the German Aerospace Center (DLR), which plans to make the framework available to a broader community.
In this thesis, a control framework for the LAMA library (http://www.libama.org) was developed to configure solvers for systems of linear equations. To this end, a parser was implemented with the Boost.Spirit library that allows runtime interpretation of a domain-specific language (DSL). The configuration language makes it possible to link solvers via their IDs without restrictions, and to assign loggers and logically combined stopping criteria to these solvers.
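The logical combination of stopping criteria described above can be sketched as follows; the class and method names are illustrative and do not reflect LAMA's actual C++ API:

```python
# Minimal sketch of logically combined stopping criteria, assuming a
# solver state dict with "iteration" and "residual" entries.

class Criterion:
    def met(self, state): raise NotImplementedError
    def __and__(self, other): return And(self, other)
    def __or__(self, other): return Or(self, other)

class MaxIterations(Criterion):
    def __init__(self, n): self.n = n
    def met(self, state): return state["iteration"] >= self.n

class ResidualBelow(Criterion):
    def __init__(self, eps): self.eps = eps
    def met(self, state): return state["residual"] < self.eps

class And(Criterion):
    def __init__(self, a, b): self.a, self.b = a, b
    def met(self, state): return self.a.met(state) and self.b.met(state)

class Or(Criterion):
    def __init__(self, a, b): self.a, self.b = a, b
    def met(self, state): return self.a.met(state) or self.b.met(state)

# "stop after 100 iterations OR once the residual drops below 1e-8"
stop = MaxIterations(100) | ResidualBelow(1e-8)
print(stop.met({"iteration": 5, "residual": 1e-9}))   # True
print(stop.met({"iteration": 5, "residual": 1e-3}))   # False
```

The operator overloads mirror the logical connectives that the DSL exposes for combining criteria.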
Sparse matrix-vector multiplication (SpMV) is one of the core operations of high-performance computing for a wide range of scientific applications. For distributed computation on increasingly popular hybrid compute clusters, the question arises of a suitable partitioning strategy for distributing data and computation. This thesis examines how the matrix structure and the different processor types influence the performance of SpMV, and proposes a model to achieve a load-balanced distribution. Its main components are a runtime prediction for current CPUs and GPUs based on a modified roofline model, and the well-established method of graph partitioning.
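The two ingredients named above can be sketched minimally, assuming the simple roofline form t = max(compute time, memory time) and a proportional row split rather than the thesis's actual graph partitioning:

```python
def roofline_time(flops, bytes_moved, peak_flops, bandwidth):
    """Predicted kernel time: limited either by compute or by memory traffic."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

def balance_rows(n_rows, device_speeds):
    """Split matrix rows proportionally to predicted device throughput."""
    total = sum(device_speeds)
    shares = [round(n_rows * s / total) for s in device_speeds]
    shares[-1] += n_rows - sum(shares)  # absorb rounding remainder
    return shares

# a device 3x faster than its peer gets 3/4 of the rows
print(balance_rows(100, [3.0, 1.0]))  # [75, 25]
```

For SpMV each row contributes roughly two flops per nonzero and a dozen bytes of values and indices, which is what makes the memory term of the roofline dominant in practice.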
Augmented reality (AR) has many application areas today. By overlaying virtual information onto the real environment, the technology is particularly well suited to supporting users during technical maintenance or repair procedures. For the virtual data to be overlaid correctly onto the real world, the position and orientation of the camera must be determined by a tracking method. In this thesis, a markerless, model-based tracking system was implemented for this purpose. During an initialization phase, the camera pose is determined with the help of calibrated reference images, so-called keyframes. In a subsequent tracking phase, the target object is tracked continuously. The system was evaluated on the 1:1 training model of the biological research laboratory Biolab, provided by the European Space Agency (ESA).
The objective of this thesis is to implement a computer-game-based motivation system for maximal strength testing on the Biodex System 3 isokinetic dynamometer. The prototype game is designed to improve the peak torque produced in an isometric knee extensor strength test. An extensive analysis is performed on a torque data set from a previous study: the torque responses of five-second maximal voluntary contractions of the knee extensor are analyzed to understand the torque response characteristics of different subjects. The parameters identified in this analysis are used in the implementation of the 'Shark and School of Fish' game. The behavior of the game for different torque responses is analyzed on a separate torque data set from the previous study. The evaluation shows that the game rewards and motivates continuously over a repetition to reach the peak torque value, and that it rewards users more if they exceed a baseline torque value within the first second and then gradually increase their torque to reach peak torque.
In service robotics, hardly any task can be performed without involving objects, as in searching, fetching, or delivering tasks. Service robots are expected to capture object-related information efficiently in real-world scenes, for instance in the presence of clutter and noise, while remaining flexible and scalable enough to memorize a large set of objects. Besides object perception tasks such as object recognition, where the object's identity is analyzed, object categorization is an important visual perception cue that assigns unknown object instances to a category based on, e.g., their appearance or shape. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. To detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on Growing Neural Gas (GNG), combined with a shape-distribution-based descriptor, is proposed to capture the shape characteristics of object candidates. Beneficial properties of the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which in turn leads to more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories.
Experimental results are shown for shapes from typical domestic object categories such as cup, can, box, bottle, bowl, plate, and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for categorizing an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results on object tracking and false-positive handling are shown to enhance the robustness of the categorization. An initial approach towards incremental shape category learning is also proposed, which learns a new category based on the set of previously learned shape categories.
This thesis presents the implementation and validation of image processing problems in hardware to estimate the gain in performance and precision. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the development costs are an important aspect of the validation. Another major aspect is the analysis of the flexibility and extensibility that a modular implementation of the FPGA design can achieve. This work builds on approaches from previous work on detecting Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed here is the tracking of detected BLOBs in continuous image material, which has been implemented for both the FPGA platform and the GPP architecture; the two approaches are compared with respect to performance and precision. This research is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The aim of the MI6 project is to track a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the resulting light dots on the projection surface of the immersive environment; the center points of these light dots allow the estimation of the user's position and orientation. One major issue that makes computer vision problems computationally expensive is the large amount of data that has to be processed in real time. A major target for the implementation was therefore a processing speed of more than 30 frames per second, which allows the system to provide feedback to the user in a response time faster than human visual perception.
One problem with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a single pixel may cover several square centimeters, so a precision error of only a few pixels can lead to an offset of several centimeters in the estimated user position. In this research, a detection and tracking system for BLOBs was developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can detect and track five to eight BLOBs; a further extension of the design was evaluated and is possible with some constraints. Additional modules were designed for compressing the image data with run-length encoding and for sub-pixel precision of the computed BLOB center points. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation was realized. The system can transmit the detection or tracking results over two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the target platforms, the DE2-70 board from Altera, BLOB detection could be extended to process up to thirty BLOBs. Implementing the tracking approach in hardware required much more effort than the software solution: designing high-level problems in hardware is, in this case, more expensive than a software implementation, and the search and match steps of the tracking approach could be realized more efficiently and reliably in software.
The additional pre-processing modules for sub-pixel precision and run-length-encoding helped to increase the system’s performance and precision.
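Run-length encoding, as used in the compression module, can be sketched in a few lines; this is a generic illustration of the technique, not the FPGA implementation:

```python
def rle_encode(row):
    """Run-length encode a binary scanline as (value, length) pairs."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([pixel, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, length) pairs back into a scanline."""
    return [v for v, n in runs for _ in range(n)]

print(rle_encode([0, 0, 1, 1, 1, 0]))  # [(0, 2), (1, 3), (0, 1)]
```

For the mostly dark frames of the light-dot tracking scenario, long runs of background pixels compress very well, which is what makes the scheme attractive as a pre-processing step.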
Segmentation of 3D Data
(2011)
This thesis was created within a project of the Fraunhofer Institute IAIS concerned with the development of a new 3D laser scanner, on which a safety application is to be built. For one software component, the segmentation of 3D data, the state of research is surveyed, and three segmentation methods are selected and implemented. The RANSAC algorithm is used to detect planes; in this thesis it is extended with a termination criterion that reduces the overall runtime when segmenting several planes.
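The thesis's specific termination criterion is not given in the abstract; a common adaptive stopping rule for RANSAC, shown here for illustration, bounds the number of iterations from the estimated inlier ratio:

```python
import math

def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC iterations needed so that, with probability
    `confidence`, at least one minimal sample consists only of inliers."""
    fail = 1.0 - inlier_ratio ** sample_size  # P(one sample is contaminated)
    if fail <= 0.0:
        return 1
    return math.ceil(math.log(1.0 - confidence) / math.log(fail))

# plane fitting needs 3 points; with 50% inliers, 35 iterations suffice
print(ransac_iterations(0.5, 3))  # 35
```

As planes are removed from the point cloud, the inlier ratio of the remaining dominant plane changes, so the bound can be re-evaluated after each segmented plane.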
This report presents an approach to quadrotor stabilization based on ICP SLAM. Because the quadrotor lacks the sensory information to detect its horizontal drift, an additional sensor, a Hokuyo UTM laser scanner, was used to perform on-line ICP-based SLAM. The obtained position estimates were used in control loops to maintain the desired position and orientation of the vehicle. Attitude parameters such as height, yaw, and position in space were controlled based on the laser data. As a result, the quadrotor demonstrated two capabilities significant for autonomous navigation: running on-line SLAM on a flying vehicle, and maintaining a desired position in 3D space. A visual approach based on pyramidal Lucas-Kanade optical flow was also examined and tested in different environmental conditions, though it was not integrated into the control loop. The performance of the Hokuyo laser scanner and the associated ICP SLAM algorithm was also tested in different environmental conditions: indoors, outdoors, and in the presence of smoke. Results are presented and discussed. The requirements of running the SLAM algorithm on-line and of carrying the fairly heavy equipment this entails motivated a search for ways to increase the quadrotor's payload and computational power; new hardware and distributed software architectures are therefore presented in the report.
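The laser-based position hold described above can be illustrated with a generic proportional controller; this is a toy sketch, not the report's actual control loop:

```python
def p_controller(setpoint, measurement, gain):
    """Proportional control: the correction grows with the position error."""
    return gain * (setpoint - measurement)

# toy altitude hold: each step the vehicle moves by half the commanded
# correction, starting from 0.5 m and targeting 1.5 m
height = 0.5
for _ in range(20):
    height += 0.5 * p_controller(1.5, height, 1.0)
```

With the step response modeled this crudely, the height converges geometrically toward the 1.5 m setpoint; a real quadrotor controller would add integral and derivative terms and act on thrust rather than position directly.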
In the field of accessing and visualizing mobile sensors and their recorded data, different approaches have been realized. The OGC Sensor Observation Service provides a standard for accessing this information, which is stored on servers. To access these servers, an interface must be developed and implemented. The result should be a configurable development framework for web-based GIS clients supporting the OGC Sensor Observation Service. In particular, the framework should allow continuous position updates of mobile sensors. Visualization features such as charts, bounding boxes of sensors, and data series should be included.
The task of this thesis is to develop an OGC-compliant Sensor Observation Service (SOS), a component of the SWE, for GPS-related sensor data in this context. In contrast to existing implementations, it should support full mobility of the sensors and be configurable with respect to adding different kinds of sensors. In particular, mobile phones should be considered as sensors that transmit their data to the SOS server through the transactional SOS interface.
This master's thesis describes a supervised approach to the detection and identification of humans in TV-style video sequences. In still images and video sequences, humans appear in different poses and views, fully visible and partly occluded, at varying distances to the camera, in different places, under different illumination conditions, etc. This diversity in appearance makes human detection and identification a particularly challenging problem, a solution to which is interesting for a wide range of applications such as video surveillance and content-based image and video processing. To detect humans in views ranging from full body to close-up and in the presence of clutter and occlusion, humans are modeled as an assembly of several upper-body parts. For each body part, a detector is trained based on a Support Vector Machine and on densely sampled, SIFT-like feature points in a detection window. For more robust human detection, localized body parts are assembled using a learned model of geometric relations based on Gaussians. For flexible human identification, the outward appearance of humans is captured and learned using the Bag-of-Features approach and non-linear Support Vector Machines. Probabilistic votes for each body part are combined to improve classification results. The combined votes yield an identification accuracy of about 80% in our experiments on episodes of the TV series "Buffy the Vampire Slayer". The Bag-of-Features approach has been used in previous work mainly for object classification tasks; our results show that it can also be applied to the identification of humans in video sequences. Despite the difficulty of the given problem, the overall results are good and encourage future work in this direction.
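The Bag-of-Features representation mentioned above maps local descriptors onto a fixed-length histogram over a visual vocabulary; a minimal sketch with toy 2-D descriptors (real SIFT-like descriptors are 128-dimensional):

```python
def bof_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and count
    occurrences, yielding a fixed-length image representation."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: dist2(d, vocabulary[i]))
        hist[nearest] += 1
    return hist

vocab = [(0.0, 0.0), (10.0, 10.0)]          # two toy visual words
descs = [(1.0, 1.0), (9.0, 9.0), (0.0, 2.0)]
print(bof_histogram(descs, vocab))           # [2, 1]
```

The resulting histograms have the same length regardless of how many feature points an image produces, which is what makes them usable as SVM inputs.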
The recent explosion of available audio-visual media poses a new challenge for information retrieval research. Automatic speech recognition (ASR) systems translate spoken content into the text domain, and there is a need to search and index this data, which possesses no logical structure. One way to structure it at a high level of abstraction is to find topic boundaries. Two unsupervised topic segmentation methods were evaluated with real-world data in the course of this work. The first, TSF, models topic shifts as fluctuations in the similarity function of the transcript. The second, LCSeg, detects topic changes as the places with the least overlap of lexical chains. Only LCSeg performed close to results reported on a similar real-world corpus; other reported results could not be outperformed. Topic analysis based on repeated word usage renders topic changes more ambiguous than expected, and this issue has more impact on segmentation quality than the state-of-the-art ASR word error rate. It can be concluded that topic segmentation algorithms should be developed with real-world data to avoid potential biases towards artificial data. Unlike the evaluated approaches based on word usage analysis, methods operating on local contexts can be expected to perform better by emulating semantic dependencies.
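The similarity-fluctuation idea behind TSF can be illustrated by comparing adjacent word windows; boundaries are suggested where the similarity dips. This is a generic sketch of the idea, not the evaluated implementation:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity of two word-count vectors."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def window_similarities(words, w=20):
    """Similarity of adjacent w-word windows at each position;
    low values suggest topic shifts."""
    sims = []
    for i in range(w, len(words) - w + 1):
        left = Counter(words[i - w:i])
        right = Counter(words[i:i + w])
        sims.append(cosine(left, right))
    return sims

# two artificial "topics" with disjoint vocabulary: similarity drops to 0
# exactly at the topic boundary
words = ["cat"] * 20 + ["dog"] * 20
sims = window_similarities(words, w=5)
```

On real ASR transcripts the dips are far shallower, which is precisely the ambiguity the thesis reports for repeated-word-usage models.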
Robots integrated into a social environment with humans need the ability to locate persons in their surroundings. This is also the case for the WelcomeBot, which is developed at the Fraunhofer Institute IAIS. In the future, the robot should follow persons through buildings and guide them to certain areas; it therefore needs the capability to detect and track a person in the environment. In this master's thesis, an approach for fast and reliable tracking of a person by a mobile robotic platform is presented. Based on an investigation of different methods and sensors, a laser scanner and a camera are selected as the two primary sensors.
In this thesis, a method developed by P. Ahlrichs and B. Dünweg [Ahlrichs and Dünweg, 1998] for simulating polymers in fluids is implemented on the Cell processor. The work investigates how efficiently the Cell processor can compute this simulation.
The polymers are simulated with a molecular dynamics simulation. The monomers of the polymer chains are coupled by a bead-spring model, and the individual monomers are treated as simple point particles. This allows the monomers to interact with the fluid simulation through friction, independently of their time and length scales. The fluid itself is simulated using the lattice Boltzmann method.
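The frictional coupling between monomers and fluid in the Ahlrichs/Dünweg scheme takes the form F = -ζ(v - u), a drag force proportional to the velocity difference between the particle and the interpolated fluid flow; a direct sketch (variable names are illustrative):

```python
def friction_coupling(v_monomer, u_fluid, zeta):
    """Drag force on a point particle moving relative to the fluid:
    F = -zeta * (v - u), applied componentwise."""
    return [-zeta * (v - u) for v, u in zip(v_monomer, u_fluid)]

# a monomer moving at 1 m/s through fluid at rest feels a retarding force
print(friction_coupling([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 2.0))
```

By Newton's third law, the opposite force is deposited back onto the surrounding lattice Boltzmann nodes, which is what couples the two simulations.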
Perception remains a current scientific issue for mobile robot systems. This thesis introduces an approach to recognizing objects, namely numbers, using a digital camera on a Volksbot robot. The robot used in this thesis was specifically designed for the SICK robot day. The vision algorithm was developed in two stages: region-of-interest detection and the actual number recognition. Different algorithms were tested and evaluated; the Canny edge detector with contour finding proved the best choice for region-of-interest detection, and the Tesseract OCR engine was the best choice for number recognition. To integrate the vision component into an existing robot system, ROS was used. The thesis also discusses the integration of the EPOS motor controller into ROS.
The WebDAV protocol (Web-based Distributed Authoring and Versioning) enables the editing and management of files on a web server. From a technical point of view, WebDAV is an extension of the HTTP protocol. With the rapid growth and increasing adoption of WebDAV-based applications, such as document management systems, the demands on their reliability are rising as well. Full support for transactions, i.e. grouping a set of processing steps into one logical unit, would make an important contribution here. The properties required of transactions, which are also their main advantages, are described by the well-known acronym ACID: atomicity, consistency, isolation, and durability. At present, however, the WebDAV protocol supports only consistency and durability; complete and, above all, standards-compliant support for the ACID properties of transactions is not available. In this thesis, a transaction model for the WebDAV standard was therefore developed. The model makes it possible to execute a set of file operations transactionally and, to ensure serializability, supports both optimistic and pessimistic concurrency control. The optimistic approach was confirmed by the IETF (Internet Engineering Task Force) as a permissible and sensible way to realize transactions with WebDAV. For the pessimistic approaches, this thesis shows how the existing concepts of the WebDAV standard would have to be extended to support them as well. To verify the design decisions, a prototypical implementation of the model was created.
After a corresponding evaluation and assessment, optimistic concurrency control was implemented. On the client side, the implementation builds on the Jackrabbit library; the server side uses Subversion's WebDAV server as its foundation.
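Optimistic concurrency control, as chosen for the prototype, can be illustrated with a version check at commit time; this is a generic sketch of the technique, not the WebDAV/Jackrabbit implementation:

```python
class VersionedResource:
    """Optimistic concurrency: a commit succeeds only if the resource was
    not modified since it was read (a version check instead of locking)."""

    def __init__(self, content):
        self.content, self.version = content, 0

    def read(self):
        return self.content, self.version

    def commit(self, new_content, read_version):
        if read_version != self.version:
            return False              # conflict: someone else wrote first
        self.content = new_content
        self.version += 1
        return True

doc = VersionedResource("v0")
text, ver = doc.read()
assert doc.commit("alice", ver)       # first writer wins
assert not doc.commit("bob", ver)     # stale version, commit is rejected
```

In HTTP terms the version plays the role of an entity tag checked with an `If-Match` precondition: the losing client must re-read and retry, whereas a pessimistic scheme would have held a lock for the whole transaction.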
This thesis deals with the creation of high dynamic range images, and with making that task a little easier for photographers. An algorithm for removing image artifacts is selected and parallelized using the NVIDIA CUDA API. The resulting speedup makes the method suitable for use in image editing applications.
Today, publications are digitally available, which enables researchers to search the text and often also the content of tables. Images, by contrast, cannot be searched. This is not a problem for most fields, but in chemistry most of the information is contained in images, especially structure diagrams. Besides "normal" chemical structures, which represent exactly one molecule, there are also generic structures, so-called Markush structures. These contain variable parts and additional textual information, enabling one structure to represent several molecules at once, ranging from just a few to thousands or even millions. This ability has led to the spread of Markush structures in patents, because it allows a patent to protect an entire family of molecules at once. Besides avoiding an enumeration of all structures, it has the further consequence that, when a Markush structure is used in a patent, it is much harder to determine whether a specific structure is protected by it or not. To answer the question of whether a structure is protected, it is necessary to search the patents. Appropriate databases for this task already exist, but they are populated manually; automatic processing does not yet exist. In this project, a Markush structure reconstruction prototype is developed that is able to reconstruct bitmaps containing Markush structures (i.e. a depiction of the structure and a text part describing the generic parts) into a digital format and save them in the newly developed file format extSMILES, which is based on a context-free grammar and is searchable by virtue of that design. To develop the prototype, an in-depth analysis of the concept of Markush structures and their requirements for a reconstruction process was performed. It shows that the common connection-table concept of existing file formats cannot store Markush structures; conditions in particular are challenging for most formats. Thus, a context-free-grammar-based file format is developed that extends the SMILES format. This format, called extSMILES, ensures the searchability of the results through its grammar-based concept and can store all information contained in Markush structures. In addition, it is generic, extensible, and easy to understand. The developed prototype for Markush structure reconstruction uses extSMILES as its output format and is based on the chemical structure recognition tool chemoCR and the Unstructured Information Management Architecture (UIMA). For chemoCR, modules are developed that enable it to recognize and assemble Markush structures and to return the reconstruction result in extSMILES. For UIMA, a pipeline is developed that analyses the input text files and translates them to extSMILES. The results of both tools are then combined and presented in chemoCR. The prototype is evaluated on a representative set of twelve structures of interest with low image quality that contain all typical Markush elements; trivial structures containing only one R-group are not evaluated. Due to the challenging nature of the images, no Markush structure could be reconstructed completely correctly. However, if R-group definitions described in natural language are excluded from the task, and provided that the core structure reconstruction is improved, the success rate can be increased to 58.4%.
An important research topic is the acceleration of computations by parallelizing algorithms. Graphics processors suitable for general-purpose computation on GPUs (GPGPU) offer a current means of parallelization, making it possible to use the high performance of graphics processors for scientific computations. In this thesis, an algorithm for aligning exposure series, as used in high dynamic range (HDR) photography, is selected and parallelized on NVIDIA's graphics card architecture.
Today, the development of aircraft and spacecraft is a complex, standardized process uniting various disciplines of science and engineering. Knowledge of flight-physical properties, in particular aerodynamics and flow, is essential for the design of aircraft and spacecraft. To reduce the effort of computing these properties, methods and tools for computer-aided simulation have been designed and combined into integrated simulation-based development processes. This makes it possible, for example, to save up to several years compared with physical tests in wind tunnels [Bec08].
Object-Relational Databases and Rough Sets for the Analysis of Contextualized Attention Metadata
(2009)
This thesis addressed two aspects of joint work in shared virtual environments. First, several techniques were presented that allow two different views to be displayed simultaneously on one projection surface (switching, picture-in-picture, and split screen). The focus of this part was the split screen, since it produces two equivalent, distortion-free images of both views. To obtain the correct perspective, each viewer's view frustum is split vertically in the middle. As a result, an observed object can be cut off at the edge of the image, so the viewer's camera has to be re-aimed at that object. This can lead to different transformations for the two users, which disturbs collaborative work. The second aspect dealt with a collision problem that can occur when several users pass through a narrow passage together. The virtual environment is displayed in the TwoView system, where users move on a freely walkable area on which their real positions are captured and transferred into the virtual environment. The virtual environment is traversed along paths whose playback speed can be controlled by one user. If the users stand too far apart to fit through a passage, at least one of them would have to walk through a wall. To solve this problem, a path correction was implemented that either guides the viewers along a safe route through the passage or halts progress along the path. Since both the display of two views and the path correction can impair the shared space, an empirical test was finally conducted to assess this effect.