Fachbereich Informatik
In the field of autonomous robotics, sensors have played a major role in defining the scope of the technology and, to a great extent, its limitations as well. This cycle of constant updates and technological advancement has given birth to industries that were once inconceivable. Autonomous driving, for example, has a serious impact on the safety and security of people and equally strong implications for the dynamics and economics of the market. With sensors like LiDAR and RADAR delivering 3D measurements as point clouds, there is a need to process these raw measurements directly, and many research groups are working on this problem. Considerable research has gone into solving the task of object detection on 2D images. In this thesis we aim to develop a LiDAR-based 3D object detection scheme. We combine the ideas of PointPillars and feature pyramid networks from 2D vision to propose Pillar-FPN. The proposed method directly takes 3D point clouds as input and outputs 3D bounding boxes. Our pipeline consists of multiple variations of the proposed Pillar-FPN at the feature fusion level, which are described in the results section. We trained our model on the KITTI training set and evaluated it on the KITTI validation set.
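A minimal sketch, assuming a PyTorch setting, of the kind of FPN-style neck that could be combined with pillar-based bird's-eye-view (BEV) features; the module name `BEVFPN`, the channel counts, and the map sizes are illustrative and not taken from the thesis.

```python
# Hypothetical FPN-style neck over BEV feature maps, as one might pair with a
# PointPillars backbone; not the thesis implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BEVFPN(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        # 1x1 lateral convolutions project each backbone stage to a common width
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3x3 output convolutions smooth the merged maps
        self.output = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, features):
        # features: list of BEV maps ordered from high to low resolution
        laterals = [l(f) for l, f in zip(self.lateral, features)]
        # top-down pathway: upsample the coarser map and add it to the finer one
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [o(x) for o, x in zip(self.output, laterals)]

# toy BEV maps standing in for pillar-scattered features at three strides
maps = [torch.randn(1, 64, 200, 176), torch.randn(1, 128, 100, 88), torch.randn(1, 256, 50, 44)]
fused = BEVFPN()(maps)
print([f.shape for f in fused])
```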
This project focuses on object detection in dense volume data. There are several types of dense volume data, namely Computed Tomography (CT) scans, Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI); this work focuses on CT scans. CT scans are not limited to the medical domain; they are also used in industry, for example in airport baggage screening and on assembly lines, where object detection systems should be able to detect objects quickly. One way to address computational complexity and speed up object detection is to use low-resolution images. Low-resolution CT scanning is fast, so the entire process of scanning and detection can be accelerated by using low-resolution images. Even in the medical domain, the exposure time of the patient should be reduced to lower the radiation dose, and this could be achieved by allowing low-resolution CT scans. Hence it is essential to find out which object detection model offers better accuracy as well as speed on low-resolution CT scans. However, existing approaches do not provide details about how a model performs when the resolution of CT scans is varied. The goal of this project is therefore to analyze the impact of varying CT-scan resolution on both the speed and accuracy of the model. Three object detection models, namely RetinaNet, YOLOv3, and YOLOv5, were trained at various resolutions. Among the three, YOLOv5 achieved the best mAP and F1 score at multiple resolutions on the DeepLesion dataset, while RetinaNet had the lowest inference time. From the experiments, it can be concluded that sacrificing mean average precision (mAP) to improve inference time by reducing resolution is feasible.
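For illustration, a small timing sketch of the kind of resolution-versus-speed comparison described above, using an off-the-shelf torchvision RetinaNet with random weights; the resolutions, the single synthetic slice, and the use of `min_size`/`max_size` to fix the internal resize are assumptions, not the thesis setup.

```python
# Hypothetical sketch: measure how detector inference time changes with input resolution.
import time
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

for side in (256, 384, 512):
    # build the detector so that its internal transform keeps the chosen resolution
    model = retinanet_resnet50_fpn(weights=None, weights_backbone=None,
                                   min_size=side, max_size=side).eval()
    image = torch.rand(3, side, side)          # synthetic stand-in for a CT slice
    with torch.no_grad():
        model([image])                         # warm-up pass
        start = time.perf_counter()
        model([image])
        elapsed = time.perf_counter() - start
    print(f"{side}x{side}: {elapsed * 1000:.1f} ms per image")
```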
As cameras are ubiquitous in autonomous systems, object detection is a crucial task. Object detectors are widely used in applications such as autonomous driving, healthcare, and robotics. Given an image, an object detector outputs both the bounding box coordinates as well as classification probabilities for each object detected. The state-of-the-art detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications in particular. It is therefore crucial to explain the reason behind each detector decision in order to gain user trust, enhance detector performance, and analyze their failure.
Previous work fails to explain as well as evaluate both bounding box and classification decisions individually for various detectors. Moreover, no tools explain each detector decision, evaluate the explanations, and also identify the reasons for detector failures. This restricts the flexibility to analyze detectors. The main contribution presented here is an open-source Detector Explanation Toolkit (DExT). It is used to explain the detector decisions, evaluate the explanations, and analyze detector errors. The detector decisions are explained visually by highlighting the image pixels that most influence a particular decision. The toolkit implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. To the author’s knowledge, this is the first work to conduct extensive qualitative and novel quantitative evaluations of different explanation methods across various detectors. The qualitative evaluation incorporates a visual analysis of the explanations carried out by the author as well as a human-centric evaluation. The human-centric evaluation includes a user study to understand user trust in the explanations generated across various explanation methods for different detectors. Four multi-object visualization methods are provided to merge the explanations of multiple objects detected in an image as well as the corresponding detector outputs in a single image. Finally, DExT implements the procedure to analyze detector failures using the formulated approach.
The visual analysis illustrates that the ability to explain a model is more dependent on the model itself than the actual ability of the explanation method. In addition, the explanations are affected by the object explained, the decision explained, detector architecture, training data labels, and model parameters. The results of the quantitative evaluation show that the Single Shot MultiBox Detector (SSD) is more faithfully explained compared to other detectors regardless of the explanation methods. In addition, a single explanation method cannot generate more faithful explanations than other methods for both the bounding box and the classification decision across different detectors. Both the quantitative and human-centric evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides more trustworthy explanations among selected methods across all detectors. Finally, a convex polygon-based multi-object visualization method provides more human-understandable visualization than other methods.
The author expects that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
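As a minimal illustration of the family of methods mentioned above, the sketch below computes a vanilla-gradient saliency map on a plain image classifier; it is not the DExT implementation (which handles box and class outputs of detectors and methods such as SmoothGrad and Guided Backpropagation), and the model and input are stand-ins.

```python
# Gradient-based saliency sketch: how strongly each pixel influences the top decision.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                 # gradient of the decision w.r.t. the pixels

saliency = image.grad.abs().max(dim=1).values   # per-pixel importance map
print(saliency.shape)                           # torch.Size([1, 224, 224])
```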
High-dimensional and multi-variate data from dynamical systems such as turbulent flows and wind turbines can be analyzed with deep learning due to its capacity to learn representations in lower-dimensional manifolds. Two challenges of interest arise from data generated from these systems, namely, how to anticipate wind turbine failures and how to better understand air flow through car ventilation systems. There are deep neural network architectures that can project data into a lower-dimensional space with the goal of identifying and understanding patterns that are not distinguishable in the original dimensional space. Learning data representations in lower dimensions via non-linear mappings allows one to perform data compression, data clustering (for anomaly detection), data reconstruction and synthetic data generation.
In this work, we explore the potential that variational autoencoders (VAE) have to learn low-dimensional data representations in order to tackle the problems posed by the two dynamical systems mentioned above. A VAE is a neural network architecture that combines the mechanisms of the standard autoencoder and variational Bayes. The goal here is to train a neural network to minimize a loss function defined by a reconstruction term together with a variational term defined as a Kullback-Leibler (KL) divergence.
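A minimal sketch of this objective, assuming a Gaussian approximate posterior parameterized by (mu, log_var) and a standard normal prior; the function name, the MSE reconstruction term, and the toy tensors are assumptions for illustration only.

```python
# VAE objective sketch: reconstruction term plus KL divergence to the prior.
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, log_var, beta=1.0):
    # reconstruction term: how well the decoder reproduces the input
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # closed-form KL divergence between N(mu, sigma^2) and N(0, 1)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# toy example with random tensors standing in for a batch of inputs
x = torch.rand(8, 32)
x_hat, mu, log_var = torch.rand(8, 32), torch.zeros(8, 4), torch.zeros(8, 4)
print(vae_loss(x_hat, x, mu, log_var).item())
```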
The report discusses the results obtained for the two different data domains: wind turbine time series and turbulence data from computational fluid dynamics (CFD) simulations.
We report on the reconstruction, clustering, and unsupervised anomaly detection of wind turbine multi-variate time series data using a variant of the VAE called the Variational Recurrent Autoencoder (VRAE). We trained a VRAE to cluster normal and abnormal wind turbine series (two-class problem) as well as normal and multiple abnormal series (multi-class problem). We found that the model is capable of distinguishing between normal and abnormal cases by reducing the dimensionality of the input data and projecting it to two dimensions using techniques such as Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). A set of anomaly scoring methods is applied on top of these latent vectors in order to perform unsupervised clustering. We achieved an accuracy of up to 96% with the KMeans++ algorithm.
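An illustrative sketch of this projection-and-clustering step with scikit-learn; the latent vectors below are random stand-ins for VRAE encodings, and the number of clusters is assumed from the two-class setting.

```python
# Project latent vectors to 2D with PCA and cluster them with KMeans.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

latent = np.random.randn(200, 32)                 # placeholder VRAE latent vectors
projected = PCA(n_components=2).fit_transform(latent)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(projected)
print(np.bincount(labels))                        # cluster sizes
```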
We also report data reconstruction and generation results for two-dimensional turbulence slices corresponding to a CFD simulation of an HVAC air duct. For this, we trained a Convolutional Variational Autoencoder (CVAE). We found that the model is capable of reconstructing laminar flows up to a certain degree of resolution as well as generating synthetic turbulence data from the learned latent distribution.
The German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) conducts extensive research and studies in the field of aeronautics and space flight. Studies on health and medicine also play a very important role at DLR. For this purpose, DLR conducts the Artificial Gravity Bed Rest Study (AGBRESA) on behalf of the European Space Agency (ESA) and in cooperation with NASA. In this study, the negative effects of weightlessness on humans in space are simulated, and experiments are carried out to counteract these negative effects. The results of the experiments are documented at DLR digitally, but also on paper. In this master's project, my task is to replace the paper protocols for blood sampling and laboratory documentation with a digital solution.
Intelligent dialogue systems, or chatbots, are increasingly used as virtual points of contact by companies and institutions. Based on a knowledge base, chatbots can answer a large share of customer inquiries automatically. An analogous use of chatbots as digital points of contact for public administrations is conceivable: they could help citizens orient themselves within administrative structures and make use of administrative services efficiently and effectively.
This thesis examines the use of a chatbot in public administration with respect to the resulting costs and the expected benefits. Based on an extensive literature review and the prototypical implementation of a chatbot for a city portal, challenges of this application domain are identified, concrete modes of operation and implementation strategies for chatbots are discussed, and several success factors are formulated that form the core of a recommendation for decision-makers in public administrations.
This work aims to create a natural language generation (NLG) basis for the further development of systems for automatic examination question generation and automatic summarization at Hochschule Bonn-Rhein-Sieg and Fraunhofer IAIS, respectively. Both tasks are highly relevant today: the first can significantly simplify the work of university teachers, and the second can assist in retrieving knowledge faster from the excessively large amounts of information that people often work with. We focus on the search for an efficient and robust approach to the controlled NLG problem. Although the initial idea of the project was to use generative adversarial networks (GANs), we switched our attention to more robust and easily controllable autoencoders. Thus, in this work we implement an autoencoder for unsupervised discovery of latent space representations of text and show the ability of the system to generate new sentences based on this latent space. In addition, we apply Gaussian mixture techniques in order to obtain meaningful text clusters and thereby try to create a tool that would allow us to generate sentences relevant to the semantics of the Gaussian clusters, e.g. positive or negative reviews or examination questions on a certain topic. The developed system is tested on several datasets and compared to the performance of GANs.
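A hedged sketch of the clustering-and-generation idea: fit a Gaussian mixture over autoencoder latent codes and sample new codes from it. The codes below are random placeholders, and a real decoder would map the sampled codes back to sentences; component count and dimensions are assumptions.

```python
# Gaussian mixture over latent codes: cluster membership plus sampling of new codes.
import numpy as np
from sklearn.mixture import GaussianMixture

codes = np.random.randn(500, 64)                    # placeholder latent codes
gmm = GaussianMixture(n_components=3, random_state=0).fit(codes)

cluster = gmm.predict(codes)                        # which component each code belongs to
new_codes, _ = gmm.sample(5)                        # draw new codes to decode into text
print(np.bincount(cluster), new_codes.shape)
```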
The last two decades have been shaped by the exponential growth of available data. Every day, humans and machines produce more and more data, which is often stored in distributed data stores. Application areas can be found, for example, in physics and astronomy, where immense amounts of data are generated by particle accelerators or satellites and must be stored and processed. New insights can be gained from these amounts of data neither by humans directly nor by traditional analysis methods. Processing these masses of data requires parallel and distributed data analysis methods. [MTT18, NEKH+18]
Neural network based object detectors are able to automate many difficult, tedious tasks. However, they are usually slow and/or require powerful hardware. One main reason is Batch Normalization (BN) [1], an important method for building these detectors. Recent studies present a potential replacement called the Self-normalizing Neural Network (SNN) [2], whose core is a special activation function named the Scaled Exponential Linear Unit (SELU). This replacement seems to retain most of BN's benefits while requiring less computational power. Nonetheless, it is uncertain whether SELU and neural network based detectors are compatible with one another. An evaluation of networks incorporating SELU would help clarify that uncertainty. Such an evaluation is performed through a series of tests on different neural networks. After the evaluation, it is concluded that, while indeed faster, SELU is still not as good as BN for building complex object detector networks.
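An illustrative contrast of the two building blocks discussed above, assuming a PyTorch setting: a convolution with BN and ReLU versus a self-normalizing block that drops BN and uses SELU (typically paired with LeCun-normal weight initialization). The layer sizes are invented.

```python
# BN block vs. self-normalizing SELU block (illustrative only).
import torch
import torch.nn as nn

bn_block = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
)

selu_block = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.SELU(inplace=True),
)
# LeCun-normal initialization (std = 1/sqrt(fan_in)), obtained here via Kaiming with linear gain
nn.init.kaiming_normal_(selu_block[0].weight, mode="fan_in", nonlinearity="linear")

x = torch.randn(4, 3, 64, 64)
print(bn_block(x).shape, selu_block(x).shape)
```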
The recent explosion of available audio-visual media is a new challenge for information retrieval research. Automatic speech recognition (ASR) systems translate spoken content into the text domain. There is a need for searching and indexing this data, which possesses no logical structure. One possible way to structure it at a high level of abstraction is by finding topic boundaries. Two unsupervised topic segmentation methods were evaluated with real-world data in the course of this work. The first one, TSF, models topic shifts as fluctuations in the similarity function of the transcript. The second one, LCSeg, treats topic changes as the places with the least overlapping lexical chains. Only LCSeg performed close to results on a similar real-world corpus; the other reported results could not be outperformed. Topic analysis based on repeated word usage renders topic changes more ambiguous than expected, and this issue has more impact on segmentation quality than the state-of-the-art ASR word error rate. It can be concluded that it is advisable to develop topic segmentation algorithms with real-world data to avoid potential biases toward artificial data. Unlike the evaluated approaches based on word-usage analysis, methods operating on local contexts can be expected to perform better by emulating semantic dependencies.
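A rough sketch of the similarity-fluctuation idea behind a TSF-style segmenter: score each gap between adjacent text blocks by lexical cosine similarity and place a boundary at the weakest gap. The toy transcript and the sentence-level windows are invented; real systems work on ASR transcripts with larger windows.

```python
# Similarity-dip topic boundary sketch on a tiny invented transcript.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "the match ended in a draw after extra time",
    "the goalkeeper saved two penalties in the shootout",
    "markets opened lower amid inflation worries",
    "the central bank signalled another rate hike",
]
vectors = CountVectorizer().fit_transform(sentences)
gaps = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
        for i in range(len(sentences) - 1)]
boundary = int(np.argmin(gaps))          # gap with the weakest lexical overlap
print(gaps, "boundary after sentence", boundary)
```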
Chipkarten im Mobilfunk
(2002)
Estimation of Prediction Uncertainty for Semantic Scene Labeling Using Bayesian Approximation
(2018)
With the advancement in technology, autonomous and assisted driving are close to becoming reality. A key component of such systems is an understanding of the surrounding environment, which can be attained by performing semantic labeling of driving scenes. Deep learning based models developed over the years outperform classical image processing algorithms for the task of semantic labeling. However, existing models only produce semantic predictions and do not provide a measure of uncertainty about those predictions. Hence, this work focuses on developing a deep learning based semantic labeling model that produces semantic predictions and their corresponding uncertainties. Autonomous driving needs a model that operates in real time; however, the Full Resolution Residual Network (FRRN) [4] architecture, found to be the best performing architecture during the literature search, cannot satisfy this condition. Hence, a smaller network, similar to FRRN, has been developed and used in this work. Based on the work of [13], the developed network is extended by adding dropout layers, and the dropouts are used during testing to perform approximate Bayesian inference. Existing works on uncertainty do not provide quantitative metrics to evaluate the quality of the uncertainties estimated by a model. Hence, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is proposed and used as an evaluation metric in this work. Further, a comparative analysis of the influence of dropout layer position, drop probability, and the number of samples on the quality of uncertainty estimation is performed. Finally, based on the insights gained from the analysis, a model with an optimal dropout configuration is developed. It is evaluated on the Cityscapes dataset and shown to outperform the baseline model with an AUC-ROC of about 90%, compared to about 80% for the baseline.
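An illustrative Monte Carlo dropout sketch of the test-time procedure described above: keep only the dropout layers active during inference, run several stochastic forward passes, and treat the per-pixel variance as an uncertainty estimate. The tiny network, class count, and number of samples are stand-ins, not the FRRN-like model of the thesis.

```python
# MC dropout at test time: mean prediction plus per-pixel predictive variance.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                          # dropout layer used for MC sampling
    nn.Conv2d(16, 5, 1),                          # 5 semantic classes
)
net.eval()
for m in net.modules():                           # re-enable only the dropout layers
    if isinstance(m, nn.Dropout2d):
        m.train()

x = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    samples = torch.stack([net(x).softmax(dim=1) for _ in range(20)])
mean_pred = samples.mean(dim=0)                   # approximate predictive distribution
uncertainty = samples.var(dim=0).sum(dim=1)       # per-pixel predictive variance
print(mean_pred.shape, uncertainty.shape)
```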
The Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), located on the Schloss Birlinghoven campus in Sankt Augustin, has been conducting applied research in the areas of multi-sensor data analysis and data visualization for several years.
Within a multi-year cooperation between Fraunhofer IAIS and the Wehrtechnische Dienststelle 71 (WTD 71), the maritime surveillance system iLEXX was developed. It is intended to alert the user to suspicious situations and, depending on the context, to present all necessary options for further investigating the situation or countering a threat. The iLEXX system processes a large number of sensor data and events; depending on the scenario, several thousand updates per second arrive and must be preprocessed and visualized in real time.
This report presents an approach to quadrotor stabilization based on ICP SLAM. Because the quadrotor lacks the sensory information to detect its horizontal drift, an additional sensor, a Hokuyo UTM laser scanner, was used to perform on-line ICP-based SLAM. The obtained position estimates were used in control loops to maintain the desired position and orientation of the vehicle. Attitude parameters such as height, yaw, and position in space were controlled based on the laser data. As a result, the quadrotor demonstrated two capabilities significant for autonomous navigation: performing on-line SLAM on a flying vehicle and maintaining a desired position in 3D space. A visual approach using optical flow based on the pyramidal Lucas-Kanade algorithm was explored and tested under different environmental conditions, though it was not integrated into the control loop. The performance of the Hokuyo laser scanner and the associated ICP SLAM algorithm was also tested under different environmental conditions: indoors, outdoors, and in the presence of smoke. Results are presented and discussed. The requirement to perform on-line SLAM and to carry the rather heavy equipment needed for it motivated a solution to increase the payload of the quadrotor along with its computational power. New hardware and distributed software architectures are therefore presented in the report.
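For illustration, a generic PID loop of the kind used in such control schemes, here holding a height setpoint from laser-derived altitude estimates; the gains and the one-line plant model are invented and are not taken from the report.

```python
# Generic PID control loop sketch for holding a height setpoint.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=1.2, ki=0.1, kd=0.4, dt=0.05)
height = 0.0
for _ in range(200):                       # crude first-order response of the vehicle
    thrust = controller.step(1.0, height)  # hold 1.0 m above ground
    height += 0.05 * thrust
print(round(height, 3))                    # converges toward the 1.0 m setpoint
```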
In order to help journalists investigate large audiovisual archives, as maintained by news broadcast agencies, the multimedia data must be indexed by text-based search engines. By automatically creating a transcript through automatic speech recognition (ASR), the spoken word becomes accessible to text search, and queries for keywords become possible. But still, important contextual information such as the identity of the speaker is not captured. Especially when gathering original footage in the political domain, the identity of the speaker can be the most important query constraint, even though the name may not be prominent in the words spoken. It is thus desirable to provide this information explicitly to the search engine. To do so, the archive must be analyzed by automatic speaker identification (SID). While this research topic has seen substantial gains in accuracy and robustness over the last years, it has not yet established itself as a helpful, large-scale tool outside the research community. This thesis sets out to establish a workflow that provides automatic speaker identification. Its application is to help journalists search speeches given in the German parliament (Bundestag). This is a contribution to the News-Stream 3.0 project, a BMBF-funded research project that addresses the accessibility of various data sources for journalists.
This work extends the affordance-inspired robot control architecture introduced in the MACS project [35], and especially its approach to integrating symbolic planning systems given in [24], by providing methods for the automated abstraction of affordances into high-level operators. It discusses how symbolic planning instances can be generated automatically based on these operators and introduces an instantiation method to execute the resulting plans. Preconditions and effects of agent behaviour are learned and represented in Gärdenfors' conceptual spaces framework. Its notion of similarity is used to group behaviours into abstract operators based on the affordance-inspired, function-centred view of the environment. Ways in which the capability of conceptual spaces to map subsymbolic to symbolic representations can be used to generate PDDL planning domains, including affordance-based operators, are discussed. During plan execution, affordance-based operators are instantiated by agent behaviour based on the situation directly before its execution: the current situation is compared to past ones, and the behaviour that has been most successful in the past is applied. Execution failures can be repaired by action substitution. The concept of using contexts to dynamically change dimension salience, as introduced by Gärdenfors, is realized using techniques from the field of feature selection. The approach is evaluated using a 3D simulation environment and implementations of several object manipulation behaviours.
This thesis develops a method for the segmentation of outdoor scenes and terrain classification. For this purpose, 360-degree laser scans of roads, building facades, and forest paths are recorded. Several 2D visual representations are created from these scans, using the distance information and angle transitions of the polar coordinates, the remission values, and the normal vectors. The normal vectors are computed with a modern, low-runtime method. Surface properties within a point cloud are then analyzed and four classes are distinguished: ground, vegetation, obstacle, and sky. Segmentation and classification happen in a single step: the variance of the normals is computed over a filter mask and a descriptor is built, containing the normal vectors and the normal variance for the x-, y-, and z-axes. The results are displayed as an overlay on the remission image. The evaluation is based on manually created ground-truth data, for which the remission image is used and the ground truth is annotated with different colors. The classification results are presented in precision-recall diagrams.
Scientists and engineers use the distributed system Remote Component Environment (RCE) to design and simulate complex systems such as airplanes, ships, and satellites. During simulation, RCE executes both local and remote code; remote code is classified as untrusted. The execution of remote code poses potential security risks for the host system of RCE, and RCE additionally provides full access to system resources. The objective of this thesis is to implement a sandbox prototype that reduces the vulnerability of RCE during the execution of remote code.
The ideal goal for a logistics warehouse is a high utilization of its transport system. This raises the question of how to select the orders that are processed simultaneously within the warehouse without causing congestion, blockages, or overload. This selection process is also referred to as path packing. This master's thesis examines path packing at the graph-theoretical level and compares several greedy heuristics, an optimal solution based on linear programming, and a combined approach. The approaches are evaluated using measured runtimes and utilization levels on differently randomized test data.
Semantic Image Segmentation Combining Visible and Near-Infrared Channels with Depth Information
(2015)
Image understanding is a vital task in computer vision that has many applications in areas such as robotics, surveillance and the automobile industry. An important precondition for image understanding is semantic image segmentation, i.e. the correct labeling of every image pixel with its corresponding object name or class. This thesis proposes a machine learning approach for semantic image segmentation that uses images from a multi-modal camera rig. It demonstrates that semantic segmentation can be improved by combining different image types as inputs to a convolutional neural network (CNN), when compared to a single-image approach. In this work a multi-channel near-infrared (NIR) image, an RGB image and a depth map are used. The detection of people is further improved by using a skin image that indicates the presence of human skin in the scene and is computed based on NIR information. It is also shown that segmentation accuracy can be enhanced by using a class voting method based on a superpixel pre-segmentation. Models are trained for 10-class, 3-class and binary classification tasks using an original dataset. Compared to the NIR-only approach, average class accuracy is increased by 7% for 10-class, and by 22% for 3-class classification, reaching a total of 48% and 70% accuracy, respectively. The binary classification task, which focuses on the detection of people, achieves a classification accuracy of 95% and true positive rate of 66%. The report at hand describes the proposed approach and the encountered challenges and shows that a CNN can successfully learn and combine features from multi-modal image sets and use them to predict scene labeling.
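A simple sketch of the early-fusion idea described above: NIR, RGB, depth, and a skin map are stacked along the channel axis and fed to a small CNN. The channel counts, image size, and the two-layer network are illustrative assumptions, not the thesis architecture.

```python
# Early fusion of multi-modal inputs for per-pixel classification (illustrative).
import torch
import torch.nn as nn

nir   = torch.rand(1, 3, 240, 320)   # multi-channel near-infrared image
rgb   = torch.rand(1, 3, 240, 320)
depth = torch.rand(1, 1, 240, 320)
skin  = torch.rand(1, 1, 240, 320)   # skin-likelihood map derived from NIR

x = torch.cat([nir, rgb, depth, skin], dim=1)     # early fusion: 8 input channels
cnn = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 10, 1),                         # per-pixel scores for 10 classes
)
print(cnn(x).shape)                               # torch.Size([1, 10, 240, 320])
```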
The objective of this thesis is to implement a computer-game-based motivation system for maximal strength testing on the Biodex System 3 isokinetic dynamometer. The prototype game has been designed to improve the peak torque produced in an isometric knee extensor strength test. An extensive analysis is performed on a torque data set from a previous study: the torque responses for five-second maximal voluntary contractions of the knee extensor are analyzed to understand the torque response characteristics of different subjects. The parameters identified in the data analysis are used in the implementation of the 'Shark and School of Fish' game. The behavior of the game for different torque responses is analyzed on a different torque data set from the previous study. The evaluation shows that the game rewards and motivates continuously over a repetition to reach the peak torque value, and that it rewards the user more if he overcomes a baseline torque value within the first second and then gradually increases the torque to reach the peak.
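A small sketch of the kind of signal analysis described above: locate the peak torque in a five-second contraction and check whether a baseline value is exceeded within the first second. The synthetic signal, sampling rate, and the 50%-of-peak baseline are invented for illustration.

```python
# Peak torque and baseline check on a synthetic five-second torque trace.
import numpy as np

fs = 100                                           # assumed sampling rate in Hz
t = np.arange(0, 5, 1 / fs)
torque = 180 * (1 - np.exp(-3 * t)) + np.random.normal(0, 2, t.size)

peak_torque = torque.max()
time_to_peak = t[torque.argmax()]
baseline_reached = (torque[:fs] >= 0.5 * peak_torque).any()   # within the first second
print(round(peak_torque, 1), round(time_to_peak, 2), baseline_reached)
```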
This master's thesis describes a supervised approach to the detection and identification of humans in TV-style video sequences. In still images and video sequences, humans appear in different poses and views, fully visible and partly occluded, at varying distances from the camera, in different places, under different illumination conditions, etc. This diversity in appearance makes human detection and identification a particularly challenging problem. A solution to this problem is interesting for a wide range of applications such as video surveillance and content-based image and video processing. In order to detect humans in views ranging from full to close-up and in the presence of clutter and occlusion, they are modeled by an assembly of several upper body parts. For each body part, a detector is trained based on a Support Vector Machine and on densely sampled, SIFT-like feature points in a detection window. For more robust human detection, localized body parts are assembled using a learned model of geometric relations based on Gaussians. For flexible human identification, the outward appearance of humans is captured and learned using the Bag-of-Features approach and non-linear Support Vector Machines. Probabilistic votes for each body part are combined to improve classification results. The combined votes yield an identification accuracy of about 80% in our experiments on episodes of the TV series "Buffy the Vampire Slayer". The Bag-of-Features approach has been used in previous work mainly for object classification tasks; our results show that it can also be applied to the identification of humans in video sequences. Despite the difficulty of the given problem, the overall results are good and encourage future work in this direction.
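A hedged sketch of a Bag-of-Features pipeline as outlined above: cluster local descriptors into a visual vocabulary, represent each image as a histogram of visual words, and train a non-linear SVM. The random descriptors stand in for SIFT-like features, and the vocabulary size, image counts, and labels are invented.

```python
# Bag-of-Features: visual vocabulary, word histograms, RBF-kernel SVM (illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 128))             # pooled local descriptors
vocabulary = KMeans(n_clusters=50, n_init=10, random_state=0).fit(descriptors)

def bof_histogram(image_descriptors):
    words = vocabulary.predict(image_descriptors)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()                           # normalized visual-word histogram

X = np.stack([bof_histogram(rng.normal(size=(300, 128))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                        # two person identities
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```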
Augmented reality (AR) has many areas of application today. By overlaying virtual information on the real environment, this technology is particularly suited to supporting users during technical maintenance or repair procedures. For the virtual data to be correctly overlaid on the real world, the position and orientation of the camera must be determined by a tracking method. In this work, a markerless, model-based tracking system was implemented for this purpose. During an initialization phase, the camera pose is determined with the help of calibrated reference images, so-called keyframes; in a subsequent tracking phase, the object of interest is tracked continuously. The system was evaluated on the 1:1 training model of the Biolab biological research laboratory, provided by the European Space Agency (ESA).
For larger projects within DLR, scientists often have to familiarize themselves with topics outside their own discipline. As part of this familiarization, they carry out research in unfamiliar subject areas. For this purpose, DLR has developed the knowledge portal KnowledgeFinder, a framework that uses classical search methods to find information in arbitrary data collections. When scientists search in unfamiliar subject areas, their superficial insight often makes it difficult to search for information in a targeted way. The classical search methods used in KnowledgeFinder, which are based on textual and structural similarity, can only partially help to find relevant information for such unspecific queries; due to ambiguities and differing contexts, such methods often reach their limits. Semantic technologies aim to remedy this shortcoming by considering the dimension of meaning in addition to textual and structural similarity. This master's thesis investigated whether the quality of KnowledgeFinder's search results can be improved by the use of semantic technologies. In a feasibility study, the KnowledgeFinder framework was extended with semantic search methods. These methods are intended to facilitate interdisciplinary research by DLR scientists by helping them find suitable search results in the corresponding subject areas.
Distributed systems comprise distributed computing systems, distributed information systems, and distributed pervasive systems. They are often very complex and their implementation is challenging. Intensive and continuous testing is indispensable to ensure reliability and high quality of a distributed system. The testing process should have a high degree of automation, not only on lower levels (i.e. unit and module testing), but also on higher testing levels (e.g. system, integration, and acceptance tests). To achieve automation on higher testing levels, virtual infrastructure components (e.g. virtual machines, virtual networks) offered as Infrastructure as a Service (IaaS) can be employed. The elasticity of on-demand computation resources fits well with the varying resource demands of automated test execution.
A methodology for automated acceptance testing of distributed systems that uses virtual infrastructure is presented. It is founded on a task-oriented model that is used to abstract concurrency and asynchronous, remote communication in distributed systems. The model is used as groundwork for a domain-specific language that allows expressing tests for distributed systems in the form of scenarios. On the one hand, test scenarios are executable and, therefore, fully automated. On the other hand, test scenarios represent requirements to the system under test making an automated, example-based verification possible.
A prototypical implementation is used to apply the developed methodology in the context of two different case studies. The first case study uses RCE as an example of a distributed, workflow-driven integration environment for scientific computing. The second one uses MongoDB as an example of a document-oriented database system that offers distributed data storage through master-slave replication. The results of the experimental evaluation indicate that the developed acceptance testing methodology is a useful approach to design, build, and execute tests for distributed systems with high quality and a high degree of automation.