H-BRS Bibliography
Document Type: Master's Thesis (57)
Statins are a group of hypolipidemic drugs that act by competitive inhibition of the HMGR enzyme. They are generally considered effective and safe but have been reported to cause side effects in skeletal muscles. A molecular side effect of statins is the blockade of terpene biosynthesis, and hence of dolichol, which is involved in N-glycosylation and O-mannosylation of proteins. Defects in O-mannosylation lead to α-dystroglycan (α-DG) hypoglycosylation and a series of hereditary dystroglycanopathies. The current project aims to gain insight into the molecular pathomechanisms induced by statins in mammalian muscle cells and to unravel a potential link between these effects and statin-induced decreases in α-DG O-mannosylation. The study was based on mass spectrometric proteomics, supported by western blot analysis, to reveal rosuvastatin effects on cellular pathways under high (micromolar) or low (nanomolar) concentration conditions. Differential proteomics revealed stronger statin effects on muscle cell function at micromolar than at nanomolar concentrations, the latter corresponding to levels reached in patients' plasma. We demonstrated distinct and partially overlapping patterns of fold-changed proteins under high and low statin conditions. Gene ontology term enrichment (GOTE) analyses of fold-changed proteins revealed that cellular pathways related to muscle function and development are affected even under low statin conditions, typically reached in patients' plasma during prophylactic medication.
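As a sketch of how a GOTE analysis scores a single GO term, the following uses a one-sided hypergeometric test; the counts and the term name are hypothetical illustrations, not numbers taken from the study.

```python
from scipy.stats import hypergeom

def go_enrichment_p(k_hits_annotated, n_hits, n_annotated, n_background):
    # One-sided hypergeometric test: probability of drawing at least
    # k_hits_annotated proteins carrying the GO term among the n_hits
    # fold-changed proteins, given n_annotated carriers in a background
    # of n_background quantified proteins.
    return float(hypergeom.sf(k_hits_annotated - 1, n_background,
                              n_annotated, n_hits))

# Hypothetical counts: 12 of 150 fold-changed proteins carry a
# "muscle development" term that annotates 80 of 4000 quantified proteins.
p = go_enrichment_p(12, 150, 80, 4000)
print(p < 0.05)   # True: far more than the ~3 annotated hits expected by chance
```

In practice such p-values are computed for every GO term and corrected for multiple testing before a pathway is reported as enriched.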
In this thesis, a decision-based manufacturing strategy for producing a micro gas turbine blisk from an oxide ceramic fiber composite is developed within FFE+, an internal project of the German Aerospace Center (DLR). For this purpose, the vacuum-based infusion process of the Structural and Functional Ceramics department of the Institute of Materials Research is to be used. First, the theoretical background of the material and its established processing are reviewed. On this basis, the system and functions of the oxide ceramic blisk can be defined in the sense of methodical process development. The requirements and evaluation criteria formulated there allow a reduced-effort design phase for concepts and solution principles, in which the fiber architecture is the decisive factor in finding a solution. After evaluation, validation, and adaptation of the results, the manufacturing strategy is drafted on the basis of the best-rated concept and the department's previous projects. In addition, a feasibility study of a previously unexplored process for producing oxide ceramic fiber preforms was carried out at the Institute of Aircraft Design of the University of Stuttgart. Since knowledge of the material properties is necessary to reliably guarantee the component's function, material tests at room and high temperature are planned. The concluding goal, a process-chain foundation for projects using the Institute of Materials Research's vacuum-based infusion process, consolidates the results of this thesis and of other project reports.
This thesis investigates the benefit of rubrics for grading short answers using an active learning mechanism. Automating short answer grading using Natural Language Processing (NLP) is an active research area in the education domain; it could save evaluators time that they could instead invest in preparing lectures. Most research treats short answer grading as a similarity task between reference answers and student answers. However, grading against reference answers does not account for partial grades and provides no feedback. Moreover, such fully automatic grading tries to replace the evaluator. Using rubrics for short answer grading with active learning addresses these drawbacks.
Initially, the proposed approach is evaluated on the Mohler dataset, which is widely used to benchmark short answer grading methods. This phase is used to determine the parameters of the proposed approach. With the selected parameters, the approach exceeds the performance of current state-of-the-art (SOTA) methods, achieving a Pearson correlation of 0.63 and a Root Mean Square Error (RMSE) of 0.85, surpassing SOTA methods by almost 4%.
Finally, the benchmarked approach is used to grade short answers based on rubrics instead of reference answers. The proposed approach evaluates short answers from the Autonomous Mobile Robot (AMR) dataset, providing scores and feedback (formative assessment) based on the rubrics. On this dataset, the approach achieves an average Pearson correlation of 0.61 and an RMSE of 0.83. This research thus demonstrates that rubric-based grading achieves formative assessment without compromising performance. In addition, rubrics have the advantage of generalizing to all answers.
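The two reported metrics can be computed from score lists as follows; the gold and predicted scores below are hypothetical, purely to illustrate the formulas, not data from either dataset.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between gold and predicted scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    # Root Mean Square Error between gold and predicted scores
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

gold = [5.0, 4.0, 3.5, 2.0, 1.0]   # hypothetical evaluator scores
pred = [4.5, 4.0, 3.0, 2.5, 1.5]   # hypothetical model scores
print(round(pearson(gold, pred), 3))   # 0.977
print(round(rmse(gold, pred), 3))      # 0.447
```

Pearson correlation rewards getting the ranking of answers right, while RMSE penalizes the absolute distance of each predicted score from the gold score, which is why both are reported together.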
Modern engineering relies heavily on computer technologies. This is especially true for thermoplastic manufacturing processes such as blow molding. A crucial milestone for digitalization is the continuous integration of data in unified or interoperable systems. While new simulation technologies are constantly being developed, data management standards such as STEP fail to integrate them. Industrial standards such as VMAP, on the other hand, improve interoperability for small and medium-sized enterprises, but they do not provide Simulation Process and Data Management (SPDM) technologies. For SPDM integration of VMAP data, Ontology-Based Data Access (OBDA) is used to continue the digital thread in custom semantics-based open-source solutions. An ontology of the database format (VMAP) was generated alongside an expandable knowledge graph of data access methods. A Python-based software architecture was developed that automatically uses the semantic representations of the database format and the data access methods to query data and metadata within a VMAP file. The result is a software architecture template that can be adapted to other data standards and integrated into semantic data management systems. It allows semantic queries on simulation data down to element-wise resolution without integrating the whole model information. The architecture can instantiate a file in a knowledge graph, query a file's metadatum, and, in case it is not yet available, find a semantically represented process that allows the creation and instantiation of the required metadatum (see Figure 1). The results of this thesis can be expected to form a basis for semantic SPDM tools.
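The query-or-create behavior described above can be sketched with a minimal in-memory triple store; the real architecture works on RDF-style semantics over the VMAP ontology, and every identifier below (`vmap:hasMetadatum`, `countElementsProcess`, and so on) is hypothetical.

```python
# Minimal stand-in for the knowledge graph of file metadata and
# creation processes; all names are illustrative, not VMAP IRIs.
triples = {
    ("file1", "rdf:type", "vmap:File"),
    ("file1", "vmap:hasMetadatum", "meshSize"),
    ("meshSize", "vmap:value", "0.5"),
    ("elementCount", "vmap:createdBy", "countElementsProcess"),
}

def objects(subject, predicate):
    # All objects of matching (subject, predicate, ?) triples.
    return [o for (s, p, o) in triples if s == subject and p == predicate]

def get_or_create(file_id, metadatum):
    # Query the metadatum; if the file lacks it, look up a semantically
    # represented process that can create and instantiate it.
    if metadatum in objects(file_id, "vmap:hasMetadatum"):
        return ("present", metadatum)
    procs = objects(metadatum, "vmap:createdBy")
    return ("create-via", procs[0]) if procs else ("unavailable", None)

print(get_or_create("file1", "meshSize"))       # ('present', 'meshSize')
print(get_or_create("file1", "elementCount"))   # ('create-via', 'countElementsProcess')
```

The fallback from "query" to "find a creating process" mirrors the behavior attributed to the architecture around Figure 1.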
Machine learning-based solutions are frequently adopted in applications that operate on big data. The performance of a model deployed in operations is subject to degradation due to unanticipated changes in the incoming data. Hence, monitoring data drift becomes essential to maintain the model's desired performance. Based on the conducted literature review on drift detection, statistical hypothesis testing makes it possible to investigate whether incoming data drifts from the training data. Because Maximum Mean Discrepancy (MMD) and the Kolmogorov-Smirnov (KS) test have been shown in the literature to be reliable distance measures between multivariate distributions, both were selected from the existing techniques for experimentation. Within the scope of this work, an image classification use case was studied using the Stream-51 dataset. Across the drift experiments, both MMD and KS achieved high Area Under Curve values; however, KS was faster than MMD and produced fewer false positives. The results further showed that using a pre-trained ResNet-18 for feature extraction maintained the high performance of the tested drift detectors, and that detector performance depends strongly on the sample sizes of the reference (training) data and of the test data flowing into the pipeline's monitor. Finally, the results showed that if the test data is a mixture of drifting and non-drifting samples, detector performance does not depend on how the drifting samples are scattered among the non-drifting ones, but rather on their proportion in the test set.
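A per-feature KS drift monitor of the kind evaluated here might look as follows; the feature dimensionality, sample sizes, and Bonferroni correction are illustrative assumptions rather than the thesis' exact setup.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, test, alpha=0.05):
    # Run a two-sample KS test per feature dimension; flag drift if any
    # feature rejects equality of distributions after a Bonferroni
    # correction across the feature dimensions.
    n_features = reference.shape[1]
    p_values = np.array([
        ks_2samp(reference[:, j], test[:, j]).pvalue
        for j in range(n_features)
    ])
    return bool((p_values < alpha / n_features).any()), p_values

rng = np.random.default_rng(0)
# Stand-in for extracted features (e.g. from a pre-trained backbone):
reference = rng.normal(0.0, 1.0, size=(500, 8))   # training-time features
shifted = rng.normal(1.0, 1.0, size=(200, 8))     # mean-shifted batch: drift expected
print(detect_drift(reference, shifted)[0])   # True
```

The dependence on sample sizes noted in the results shows up directly here: smaller reference or test batches widen the KS test's confidence bands and delay detection.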
This thesis proposes a multi-label classification approach using the Multimodal Transformer (MulT) [80] to perform multi-modal emotion categorization on a dataset of oral histories archived at the Haus der Geschichte (HdG). Prior uni-modal emotion classification experiments conducted on the novel HdG dataset provided less than satisfactory results. They uncovered issues such as class imbalance, ambiguities in emotion perception between annotators, and a lack of representative training data for transfer learning [28]. Hence, the objectives of this thesis were to achieve better results through multi-modal fusion and to resolve the problems arising from class imbalance and annotator-induced bias in emotion perception. A further objective was to assess the quality of the novel HdG dataset and to benchmark the results using state-of-the-art (SOTA) techniques. Based on a literature survey of the challenges, models, and datasets related to multi-modal emotion recognition, we created a methodology combining the MulT with a multi-label classification approach. This approach considerably improved overall emotion recognition, obtaining an average AUC of 0.74 and a balanced accuracy of 0.70 on the HdG dataset, which is comparable to SOTA results on other datasets. In this manner, we were also able to benchmark the novel HdG dataset and to introduce a novel multi-annotator learning approach for understanding each annotator's relative strengths and weaknesses in emotion perception. Our evaluation results highlight the potential of this multi-annotator learning approach to improve overall performance by resolving the problems arising from annotator-induced bias and variation in the perception of emotions. Complementing these results, we performed a further qualitative analysis of the HdG annotations with a psychologist to study the ambiguities found in the annotations.
We conclude that the ambiguities in the annotations likely resulted from a combination of socio-psychological factors and systemic issues in the process of creating them. As these problems are also present in most multi-modal emotion recognition datasets, the domain could therefore benefit from a set of annotation guidelines for creating standardized datasets.
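The per-label AUC underlying such multi-label results can be sketched via the rank-sum formulation, macro-averaged over the emotion labels; the label names and scores below are hypothetical, not HdG data.

```python
def auc(scores, labels):
    # AUC via the rank-sum formulation: the fraction of (positive, negative)
    # pairs where the positive sample receives the higher score (ties count half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-label model scores and binary annotations:
label_scores = {
    "joy":  ([0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]),
    "fear": ([0.2, 0.7, 0.8, 0.3], [0, 1, 1, 0]),
}
macro_auc = sum(auc(s, y) for s, y in label_scores.values()) / len(label_scores)
print(round(macro_auc, 3))   # 0.875
```

Because each label is scored independently and then averaged, this metric stays meaningful under the class imbalance the thesis reports, unlike plain accuracy.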
Object detection concerns the classification and localization of objects in an image. To cope with changes in the environment, such as when new classes are added or a new domain is encountered, the detector needs to update itself with the new information while retaining knowledge learned in the past. Previous works have shown that training the detector solely on new data produces a severe "forgetting" effect, in which the performance on past tasks deteriorates with each new learning phase. However, in many cases, storing and accessing past data is not possible due to privacy concerns or storage constraints. This project aims to investigate promising continual learning strategies for object detection without storing or accessing past training images and labels. We show that by utilizing the pseudo-background trick to deal with missing labels, and knowledge distillation to deal with missing data, the forgetting effect can be significantly reduced in both class-incremental and domain-incremental scenarios. Furthermore, integrating a small latent replay buffer can result in positive backward transfer, indicating that past knowledge is enhanced as new knowledge is learned.
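The knowledge-distillation term used to compensate for missing past data can be sketched as a soft cross-entropy between the frozen old detector's and the new detector's class predictions; the logits and temperature below are illustrative assumptions, not the project's exact loss.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the old (teacher) detector's softened class
    # distribution and the new (student) detector's; it is minimized when
    # the student reproduces the teacher's outputs, so past knowledge is
    # preserved without replaying past images.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

teacher = np.array([[2.0, 0.5, -1.0]])   # hypothetical per-class logits
aligned = distillation_loss(teacher, teacher)
diverged = distillation_loss(np.array([[-1.0, 2.0, 0.5]]), teacher)
print(aligned < diverged)   # True: drifting from the teacher raises the loss
```

During continual training this term is typically added to the regular detection loss on the new data, so the gradient trades off new-task accuracy against agreement with the old model.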
This thesis deals with corporate podcasts. Its aim is to obtain current insights into the state of development in the conception and production of corporate podcasts. The focus is on the perspective of the communicators, in the form of podcast agencies. It examines whether trends can be identified, whether different podcast agencies hold experiential knowledge, and whether overlaps between them can be recognized. To answer these questions, the study conducts a qualitative survey in the form of expert interviews.
On the one hand, audiovisual media are credited with the ability to depict reality, one reason they are of central importance in journalism. On the other hand, the technological developments of recent years make it ever easier, cheaper, and faster to create authentic-looking manipulations. Only ten years ago, manipulating video material, apart from trivial image-level operations, was possible only within film productions. That has since changed: synthetic media, also known as deepfakes, are on everyone's lips. Audiovisual manipulations thus pose an ever greater challenge to newsrooms and at times already make it into news coverage as supposedly authentic content. This raises the question: to what extent is it, and will it remain, possible for newsrooms to ensure the authenticity of audiovisual material?
Based on seven expert interviews with actors from research and practice, the thesis provides, in addition to an up-to-date description of the technical state of the art regarding manipulation and verification possibilities, a description and assessment of the existing problems and potential solutions for newsrooms, as well as an appraisal of the future development of relevant technologies and the associated consequences. The results show that newsrooms need technical tools for verification processes, but that it is hardly possible to ensure the authenticity of audiovisual material on a technological level alone. Accordingly, the greatest challenge for newsrooms in verification is currently not a lack of technical tools but rather a lack of time.
The interviewees were: Dr. Dominique Dresen (Bundesamt für Sicherheit in der Informationstechnik, BSI), Dr. Jutta Jahnel (Karlsruher Institut für Technologie, KIT), Dr. Christian Riess (FAU Erlangen-Nürnberg), Andrea Sauerbier (SPIEGEL), Jochen Spangenberg (DW Innovation, among others), Johanna Wild (Bellingcat), and Dr. Sascha Zmudzinski (Fraunhofer-Institut für Sichere Informationstechnologie, SIT).