Graduierteninstitut
Intelligent virtual agents provide a framework for simulating more life-like behavior and increasing plausibility in virtual training environments. They can improve the learning process if they portray believable behavior that can also be controlled to support the training objectives. In the context of this thesis, cognitive agents are considered a subset of intelligent virtual agents (IVA), with the focus on emulating cognitive processes to achieve believable behavior. The complexity of the employed algorithms, however, is often limited, since multiple agents need to be simulated in real time. Available solutions focus on a subset of the indicated aspects: plausibility, controllability, or real-time capability (scalability). Within this thesis project, an agent architecture for attentive cognitive agents is developed that considers all three aspects at once. The result is a lightweight cognitive agent architecture that is customizable to application-specific requirements. A generic trait-based personality model influences all cognitive processes, facilitating the generation of consistent and individual behavior. An additional mapping process provides a formalized mechanism for transferring the results of psychological studies to the architecture. Personality profiles are combined with an emotion model to achieve situational behavior adaptation. Which action an agent selects in a situation also influences plausibility. An integral element of this selection process is an agent's knowledge about its world. Therefore, synthetic perception is modeled and integrated into the architecture to provide a credible knowledge base. The developed perception module includes a unified sensor interface, a memory hierarchy, and an attention process. With the presented realization of the architecture (CAARVE), it is possible for the first time to simulate cognitive agents whose behavior is simultaneously controllable and computable in real time. The architecture's applicability is demonstrated by integrating an agent-based traffic simulation built with CAARVE into a bicycle simulator for road-safety education. The developed ideas and their realization are evaluated within this work using different strategies and scenarios. For example, it is shown how CAARVE agents utilize personality profiles and emotions to plausibly resolve deadlocks in traffic simulations. Controllability and adaptability are demonstrated in additional scenarios. Using the realization, 200 agents can be simulated in real time (50 FPS), illustrating scalability. The achieved results verify that the developed architecture can generate plausible and controllable agent behavior in real time. The presented concepts and realizations provide sound fundamentals for everyone interested in simulating IVA in real-time environments.
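As an illustration of the selection mechanism described above, the following sketch shows how a trait-based personality profile might bias utility-based action selection; the trait names, weighting scheme, and actions are illustrative assumptions, not the CAARVE API.

```python
# Illustrative sketch (not the CAARVE implementation): a trait-based
# personality profile biasing utility-based action selection.
from dataclasses import dataclass

@dataclass
class Personality:          # e.g. trait values in [0, 1]
    extraversion: float
    agreeableness: float
    neuroticism: float

def select_action(actions, personality, emotion_arousal):
    """Pick the action whose utility, biased by traits and current
    emotional arousal, is highest."""
    def utility(action):
        base = action["base_utility"]
        # Agreeable agents prefer cooperative actions; anxious agents
        # under high arousal avoid risky ones.
        base += personality.agreeableness * action.get("cooperative", 0.0)
        base -= personality.neuroticism * emotion_arousal * action.get("risk", 0.0)
        return base
    return max(actions, key=utility)

actions = [
    {"name": "yield", "base_utility": 0.4, "cooperative": 0.5, "risk": 0.0},
    {"name": "overtake", "base_utility": 0.6, "cooperative": 0.0, "risk": 0.7},
]
agent = Personality(extraversion=0.7, agreeableness=0.8, neuroticism=0.6)
print(select_action(actions, agent, emotion_arousal=0.9)["name"])  # -> 'yield'
```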
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases - object and handle grasping, object stowing, pulling, and hand-over - and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
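The following sketch illustrates the execution-model idea under stated assumptions (the class, names, and toy preconditions are hypothetical, not the thesis code): qualitative success preconditions are paired with a learned success predictor, and diagnosis reports which preconditions the failed parameters violate.

```python
# Minimal sketch of an "execution model": qualitative success preconditions
# combined with a function mapping parameters to predicted execution success,
# plus a naive diagnosis step that searches for violated preconditions.
from typing import Callable, Dict, List, Tuple

Params = Dict[str, float]

class ExecutionModel:
    def __init__(self,
                 preconditions: List[Tuple[str, Callable[[Params], bool]]],
                 success_predictor: Callable[[Params], float]):
        self.preconditions = preconditions      # (name, check) pairs
        self.success_predictor = success_predictor

    def predict_success(self, params: Params) -> float:
        if not all(check(params) for _, check in self.preconditions):
            return 0.0
        return self.success_predictor(params)

    def diagnose(self, failed_params: Params) -> List[str]:
        """Failure hypothesis: the preconditions violated by the parameters."""
        return [name for name, check in self.preconditions
                if not check(failed_params)]

# Toy grasping example: the gripper must be above the object and close to it.
model = ExecutionModel(
    preconditions=[("above_object", lambda p: p["dz"] > 0.0),
                   ("close_enough", lambda p: abs(p["dx"]) < 0.05)],
    success_predictor=lambda p: max(0.0, 1.0 - 10.0 * abs(p["dx"])),
)
print(model.diagnose({"dx": 0.12, "dz": 0.03}))  # -> ['close_enough']
```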
In this work, novel ionic agarose derivatives were first synthesized and then comprehensively characterized. Anionic agarose sulfates with regioselective derivatization at position G6 were obtained by homogeneous conversion in an ionic liquid. Cationic agarose carbamates with an adjustable degree of functionalization were accessible via a two-step synthetic approach: agarose phenyl carbonates were first prepared in a homogeneous synthesis, followed by aminolysis to the desired functional agarose derivatives. The ionic agarose derivatives were fully water-soluble even at low degrees of functionalization. This made it possible to coat alginate microcapsules polyelectrolytically and to use them as carriers for controlled drug release. Composite gels of agarose, hydroxyapatite, and agarose derivatives were also prepared and characterized. In the second part, both the composite carrier materials and the alginate microcapsules were loaded with four different model drugs (ATP, suramin, methylene blue, and A740003), and the drug release was studied over a period of two weeks. For the ionic model drugs, composite carriers containing an ionic agarose derivative, the coated microcapsules, and the combination of composite and capsules proved effective in slowing the release down to as little as 40%. For the poorly water-soluble substance A740003, a receptor ligand for the osteogenic differentiation of stem cells, a strongly delayed release from polyelectrolyte microcapsules was observed. Using fitting models known from the literature as well as newly developed ones, diffusion was identified as the main mechanism of drug release, the release curves were described mathematically with high accuracy, and conclusions about the individual phases of the release were drawn.
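As a hedged illustration of such release-curve fitting, the sketch below fits the well-known Korsmeyer-Peppas model M_t/M_inf = k·t^n to a cumulative release curve; the data points are invented for illustration and are not measurements from this work.

```python
# Sketch: fitting the Korsmeyer-Peppas model M_t/M_inf = k * t^n to a
# cumulative release curve, one common way to test whether diffusion
# dominates the release mechanism (n ~ 0.5 for Fickian diffusion in films).
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Fractional release at time t (valid for roughly the first 60% of release)."""
    return k * t**n

# Illustrative data: time in hours, fraction of drug released (not from the thesis)
t = np.array([1, 2, 4, 8, 24, 48, 72], dtype=float)
released = np.array([0.05, 0.08, 0.12, 0.17, 0.28, 0.38, 0.45])

(k, n), _ = curve_fit(korsmeyer_peppas, t, released, p0=(0.05, 0.5))
print(f"k = {k:.3f}, release exponent n = {n:.2f}")
# An exponent near the Fickian value suggests diffusion-controlled release.
```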
The human enzymes GLYAT (glycine N-acyltransferase), GLYATL1 (glutamine N-phenylacetyltransferase) and GLYATL2 (glycine N-acyltransferase-like protein 2) are not only important in the detoxification of xenobiotics via the human liver, but are also involved in the elimination of acyl residues that accumulate in the form of their coenzyme A (CoA) esters in some rare inborn errors of metabolism. This concerns, for example, disorders in the degradation of branched-chain amino acids, such as isovaleric acidemia or propionic acidemia. In addition, these enzymes assist in the elimination of ammonium, which is produced during the transamination of amino acids and accumulates in urea cycle defects. Sequence variants of the enzymes were also investigated, as they may provide evidence of impaired enzyme activities from which therapy adjustments can potentially be derived. A modified Escherichia coli strain was chosen for the overexpression and partial biochemical characterization of the enzymes, as it may allow solubility and proper folding. Since post-translational protein modifications are very limited in bacteria, we also attempted to overexpress the enzymes in HEK293 cells (human-derived). In addition to characterization via immunoblots and activity assays, intracellular localization of the enzymes was performed using GFP coupling and confocal laser scanning microscopy in transfected HEK293 cells. The GLYATL2 enzyme may have tasks beyond detoxification and metabolic defects; the preliminary molecular biology work was performed as part of this project, while the enzyme activity determinations were outsourced to a co-supervised bachelor thesis. Enzyme activity determinations with purified recombinant human enzyme from Escherichia coli showed a threefold higher activity of the GLYAT sequence variant p.(Asn156Ser), which should be considered the probably authentic wild type of the enzyme. In addition, a reduced activity of the GLYAT variant p.(Gln61Leu), which is very common in South Africa, was shown; this could be of particular importance in the treatment of isovaleric acidemia, which is also common in South Africa. Intracellularly, GLYAT and GLYATL1 could be localized to mitochondria. As the analyses have shown, sequence variations of GLYAT and GLYATL1 influence their enzyme activity. In the case of reduced GLYAT activity, patients could increasingly be treated with L-carnitine in the sense of an individualized therapy, since the conjugation of the toxic isovaleryl-CoA with glycine is restricted by the GLYAT sequence variation. Activity-reducing variants identified in this project are of particular interest, as they may influence the treatment of certain metabolic defects.
The processing of employee personal data is dramatically increasing. To protect employees' fundamental right to privacy, the law provides for the implementation of privacy controls, including transparency and intervention. At present, however, the stakeholders responsible for putting these obligations into action, such as employers and software engineers, simply lack the fundamental knowledge needed to design and implement the necessary controls. Indeed, privacy research has so far focused mainly on consumer relations in the private context; privacy in the employment context is less well studied. Since privacy is highly context-dependent, existing knowledge and privacy controls from other contexts cannot simply be transferred to the employment context. In particular, privacy in employment is subject to different legal and social norms, which require a different conceptualization of the right to privacy than is usual in other contexts. To adequately address these aspects, there is broad consensus that privacy must be regarded as a socio-technical concept in which human factors must be considered alongside technical-legal factors. Today, however, there is a particular lack of knowledge about human factors in employee privacy. Disregarding the needs and concerns of individuals, or a lack of usability, are common reasons for the failure of privacy and security measures in practice. This dissertation addresses key knowledge gaps on human factors in employee privacy by presenting the results of a total of three in-depth studies with employees in Germany. The results provide insights into employees' perceptions of the right to privacy, as well as their perceptions and expectations regarding the processing of employee personal data. The insights gained provide a foundation for the human-centered design and implementation of employee-centric privacy controls, i.e., privacy controls that incorporate the views, expectations, and capabilities of employees. Specifically, this dissertation presents the first mental models of employees on the right to informational self-determination, the German equivalent of the right to privacy. The results provide insights into employees' (1) perceptions of categories of data, (2) familiarity and expectations of the right to privacy, and (3) perceptions of data processing, data flow, safeguards, and threat models. In addition, three major types of mental models are presented, each with a different conceptualization of the right to privacy and a different desire for control. Moreover, this dissertation provides multiple insights into employees' perceptions of data sensitivity and willingness to disclose personal data in employment. Specifically, it highlights the uniqueness of the employment context compared to other contexts and breaks down the multi-dimensionality of employees' perceptions of personal data. As a result, the dimensions in which employees perceive data are presented, and differences among employees are highlighted. This is complemented by identifying personal characteristics and attitudes toward employers, as well as toward the right to privacy, that influence these perceptions. Furthermore, this dissertation provides insights into practical aspects of implementing personal data management solutions to safeguard employee privacy. Specifically, it presents the results of a user-centered design study with employees who process personal data of other employees as part of their job. Based on the results obtained, a privacy pattern is presented that harmonizes privacy obligations with personal data processing activities. The pattern is useful for designing privacy controls that help these employees handle employee personal data in a privacy-compliant manner, taking into account their skills and knowledge, thus helping to protect employee privacy. The outcome of this dissertation benefits a wide range of stakeholders who are involved in the protection of employee privacy. For example, it highlights the challenges to be considered by employers and software engineers when conceptualizing and designing employee-centric privacy controls. Policymakers and researchers gain a better understanding of employees' perceptions of privacy and obtain fundamental knowledge for future research into theoretical and abstract concepts or practical issues of employee privacy. Employers, IT engineers, and researchers gain insights into ways to empower data-processing employees to handle employee personal data in a privacy-compliant manner, enabling employers to improve and promote compliance. Since the basic principles underlying informational self-determination have been incorporated into European privacy legislation, we are confident that our results are also of relevance to stakeholders outside Germany.
Remineralizing soils? The agricultural usage of silicate rock powders in the context of One Health
(2022)
The concept of soil health describes the capacity of soil to fulfill essential functions and ecosystem services. Healthy soils are inextricably linked to sustainable agriculture and are crucial for the interconnected health of plants, animals, humans, and their environment ("One Health"). However, soil health is threatened by unprecedented rates of soil degradation. A major form of soil degradation is nutrient depletion, which has been seriously underestimated for potassium (K) and several micronutrients. One way to replenish K and micronutrients is the application of multi-nutrient silicate rock powders (SRPs). Their agronomic suitability has long been questioned due to slow weathering rates, although recent studies found significant soil health improvements and challenge past objections, which insufficiently addressed the factorial complexity of the weathering process. Furthermore, environmental co-benefits might arise from mixing SRPs with livestock slurry, which could reduce the slurry's ammonia (NH3) emissions and improve its biophysicochemical properties. However, neither the effects of SRPs on soil health nor the biophysicochemical effects of mixing SRPs with livestock slurry have hitherto been comprehensively analyzed. The overall aim of this dissertation is thus to review the agricultural usage of SRPs in the context of One Health. The first part of this thesis starts with an elaboration of the health concept in general and then explores the interlinkages between soil health and One Health. Subsequently, the potentials and oftentimes bypassed problems of operationalizing soil health are outlined, and feasible ways for its future usage are proposed. The second part of the thesis reviews how and under which circumstances SRPs can ameliorate soil health. This is done by presenting a new framework comprising the most relevant factors for the usage of SRPs, through which several contradictory outcomes of prior studies can be explained. A subsequent analysis of 48 crop trials reveals the potential of SRPs as a K and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are inconclusive. The review revealed various co-benefits that could substantially increase the overall agronomic efficiency of SRPs. The last part of the thesis reports on the effects of mixing two rock powders with cattle slurry. The SRPs significantly increased the slurry's CH4 emission rates, whereas the effects on NH3, CO2, and N2O emission rates were mostly insignificant. The rock powders increased the nutrient content of the slurry and altered its microbiology. In conclusion, the concept of soil health must be operationalized in more specific, practical, and context-dependent ways. Particularly in humid tropical environments, SRPs could enable low-cost soil health ameliorations, and their usage could have additional co-benefits regarding One Health. Mixing SRPs with organic materials like livestock slurry could overcome the major obstacle of their low solubility, although the effects on NH3 and greenhouse gas emissions must be further evaluated.
Typically, plastic packaging materials are produced with additives, e.g. stabilisers, to introduce specific desired properties into the material or, in the case of stabilisers, to prolong the shelf life of such packaging materials. However, those stabilisers are typically fossil-based and can pose risks to both environmental and human health. Therefore, the present study presents more sustainable alternatives based on regional renewable resources which show the antioxidant, antimicrobial and UV-absorbing properties required to serve successfully as plastic stabilisers. In the study, all plants are extracted and characterised with regard not only to antioxidant, antimicrobial and UV-absorbing effects, but also to additional relevant properties such as chemical constituents, molar mass distribution, and absorbance in the visible range. The extraction process is furthermore optimised and, where applicable, reasonable opportunities for waste valorisation are explored and analysed. Furthermore, interactions between the analysed plant extracts are described, and model films based on poly(lactic acid) are prepared that incorporate the analysed plant extracts. Based on those model films, formulation tests and migration analyses according to EU legislation are conducted.
The well-known aromatic and medicinal plant thyme (Thymus vulgaris L.) contains phenolic terpenoids such as thymol and carvacrol, which have strong antioxidant, antimicrobial and UV-absorbing effects. Analyses show that those effects can be used in both lipophilic and hydrophilic surroundings, that the cultivar Varico 3 is more potent than the other analysed thyme variants, and that a passive extraction setup can be used for extract preparation, while distillation of the essential oils can be a more efficient approach.
Macromolecular antioxidant polyphenols, particularly proanthocyanidins, have been found in the seed coats of the European horse chestnut (Aesculus hippocastanum L.), which are regularly discarded in the phytopharmaceutical industry. In this study, such effects and compounds are reported for the first time, and a valorisation of these waste materials has been analysed successfully. Furthermore, a passive extraction setup for waste materials and whole seeds has been developed. In extracts of snowdrops, specifically Galanthus elwesii HOOK.F., high concentrations of tocopherol have been found, which promote a particularly high antioxidant capacity in lipophilic surroundings. Different coniferous woods (Abies div., Picea div.) that are used as Christmas trees are extracted after separating the biomass into needles and wood parts, and then analysed with regard to extraction optimisation and the drought resistance of the active substances. Antioxidant and UV-absorbing proanthocyanidins are found even in dried biomasses, allowing the circular use of discarded Christmas trees as bio-based stabilisers and the production of sustainable paper as a byproduct.
Single telogen hairs are a frequently encountered type of trace at crime scenes. At present, they are mostly excluded from STR typing because, owing to low DNA amounts and strong DNA degradation, their STR profiles are in many cases incomplete and difficult to interpret. In the present work, a systematic approach was applied to reveal correlations between DNA quantity and DNA degradation on the one hand and STR typing success on the other, and, based on these correlations, to predict the typing success of DNA from hairs.
For this purpose, a human-specific (RiboD) and a canine-specific (RiboDog) qPCR-based assay were developed for measuring DNA quantity and assessing DNA integrity by means of a degradation value (D value). Because the primers used target ubiquitously occurring ribosomal DNA sequences, the underlying principle can be transferred quickly and inexpensively to other species. The functionality of the assays was confirmed using serially degraded DNA, and the human assay was validated against the commercial Quantifiler Trio DNA Quantification Kit. Finally, the assays were applied to DNA from single telogen and catagen hairs of humans and dogs to investigate the relationship between DNA quantity and DNA integrity and the completeness of the STR alleles (allele recovery) of DNA profiles obtained with capillary electrophoresis (CE) STR kits. For single human hairs, allele recovery depended on both the DNA quantity and the DNA integrity. In contrast, DNA degradation in single dog hairs was consistently lower, and allele recovery depended solely on the amount of extracted DNA.
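A minimal sketch of how such a degradation value can be computed, assuming (as in common commercial degradation assays) that it is the ratio of the DNA concentration measured with a short amplicon target to that measured with a long amplicon target; the exact RiboD definition may differ.

```python
# Hypothetical degradation value: intact DNA gives D ~ 1, because long and
# short targets amplify equally well; degraded DNA gives D >> 1, because
# long fragments are preferentially lost.
def degradation_value(conc_short_ng_ul: float, conc_long_ng_ul: float) -> float:
    if conc_long_ng_ul <= 0:
        raise ValueError("long-amplicon target not detected")
    return conc_short_ng_ul / conc_long_ng_ul

print(degradation_value(0.20, 0.02))  # -> 10.0, a strongly degraded sample
```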
To further improve the STR analysis of degraded human DNA samples, a novel NGS-based assay (maSTR, mini-amplicon STR) was established that amplifies the 16 forensic STR loci of the European Standard Set plus amelogenin in parallel as very short amplicons (76-296 bp). With intact DNA, the maSTR assay generated reproducible, complete profiles without allelic drop-ins at around 200 pg of input DNA. At lower DNA amounts, occasional allelic drop-ins occurred, although complete profiles were still obtained with at least 43 pg of DNA.
The combined strategy of RiboD measurements of DNA quantity and integrity and the resulting STR typing success of the maSTR assay was validated on degraded DNA. The strategy was then applied to DNA from single telogen and catagen hairs and compared with the results of the CE-based PowerPlex ESX 17 kit, which analyzes the same STR marker set. The typing success of both STR assays depended on both an optimal amount of template DNA and the DNA integrity. With the maSTR assay, complete profiles were demonstrated with approximately 50 pg of input DNA for slightly degraded DNA from single hairs, and with approximately 500 pg of strongly degraded DNA. Owing to the low DNA amounts in single telogen hairs, the reproducibility of the maSTR results fluctuated, but in terms of allele recovery it was consistently superior to the PowerPlex ESX 17 kit.
A comparison with two CE-based STR kits that are complementary with respect to their amplicon length distributions (PowerPlex ESX 17 and ESI 17 Fast), as well as with a commercial NGS kit (ForenSeq DNA Signature Prep), showed that the decisive factor for typing degraded DNA is not the NGS technique itself but the shortness of the amplicons. In all comparisons with the commercial kits, however, the maSTR assay exhibited a higher number of allelic drop-ins, which occurred more frequently the lower the DNA amount used and the more strongly it was degraded.
Because profiles containing allelic drop-ins correspond to mixed profiles, the STR profiles generated with the maSTR assay were examined using methods for the interpretation of mixed traces. In composite interpretation, all alleles occurring across replicates are counted; in consensus interpretation, only the reproducible alleles. Composite interpretation proved most suitable for profiles with few allelic drop-ins (PowerPlex ESX 17-generated profiles), and consensus interpretation for profiles containing allelic drop-ins (maSTR-generated profiles).
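A minimal sketch of the two interpretation schemes (illustrative code, not the software used in this work): composite interpretation takes the union of alleles across replicates, while consensus interpretation keeps only reproducible alleles.

```python
# Composite vs. consensus interpretation of replicate STR profiles at one
# locus. Each replicate is the set of alleles called in one amplification.
from collections import Counter

replicates = [{"12", "14"}, {"12", "14", "15"}, {"12"}]  # illustrative calls

composite = set().union(*replicates)                  # every allele seen at least once
counts = Counter(a for rep in replicates for a in rep)
consensus = {a for a, c in counts.items() if c >= 2}  # reproducible alleles only

print(sorted(composite))  # ['12', '14', '15'] - keeps possible drop-ins
print(sorted(consensus))  # ['12', '14']       - filters non-reproducible alleles
```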
Finally, the GenoProof Mixture 3 software was used to investigate the extent to which semi-continuous and fully continuous probabilistic methods are suitable for the biostatistical evaluation of DNA profiles from single hairs. Owing to its high number of allelic drop-ins, the maSTR assay proved only slightly superior to the CE-based methods, and only for DNA that is available in sufficient quantity and is only mildly degraded; in this range, however, the hair profile can also be assigned to the reference profile with CE-based methods.
From all results, a recommendation for handling DNA from shed single hairs was derived, based on the degree of DNA degradation in combination with the DNA quantity. The present work thus lays a foundation for making shed single hairs usable in routine forensic casework and, where appropriate, for applying the approach to other trace types with low amounts of degraded DNA. This could increase the usability of such traces for forensic criminalistics, especially where the standard CE-based methods fail. In the long term, owing to the high multiplexing capability of uniform, short markers, the NGS technique is generally superior to CE-based techniques for typing degraded DNA.
Collaboration among multiple users on large screens leads to complicated behavior patterns and group dynamics. To gain a deeper understanding of collaboration on vertical, large, high-resolution screens, this dissertation builds on previous research and gains novel insights through new observational studies. Among other things, the collected results reveal new patterns of collaborative coupling, suggest that territorial behavior is less critical than shown in previous research, and demonstrate that workspace awareness can also negatively affect the effectiveness of individual users.
In this thesis it is posed that the central object of preference discovery is a co-creative process in which the Other can be represented by a machine. It explores efficient methods to enhance introverted intuition using extraverted intuition's communication lines. Possible implementations of such processes are presented using novel algorithms that perform divergent search to feed the users' intuition with many examples of high-quality solutions, allowing them to exert influence interactively. The machine feeds and reflects upon human intuition, combining what is possible with what is preferred. The machine model and the divergent optimization algorithms are the motor behind this co-creative process, in which machine and users co-create and interactively choose branches of an ad hoc hierarchical decomposition of the solution space.
The proposed co-creative process consists of several elements: a formal model for interactive co-creative processes, evolutionary divergent search, diversity and similarity, data-driven methods to discover diversity, limitations of artificial creative agents, matters of efficiency in behavioral and morphological modeling, visualization, a connection to prototype theory, and methods that allow users to influence artificial creative agents. This thesis helps put the human back into the design loop in generative AI and optimization.
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VE and RE). All feedback modalities have been designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen. It was also shown that perceptual aspects in virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied; these methods are used to create and evaluate the interfaces, and to this end seven experiments were performed. All five prototypes use a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VE and RE target the feet. For touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface; a psychophysical study was conducted to highlight the potential of this interface. The back of a smartphone is used for haptic feedback in the third prototype; besides a psychophysical study, it was found that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. In the first prototype, the sides of the feet are stimulated to provide proximity information from remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a purpose-built foot platform that provides several feedback modalities shows that self-motion perception can be increased.
At the end of 2019, about 4.1 billion people on earth were using the internet. Because people entrust their most intimate and private data to their devices, European legislation has declared the protection of natural persons in relation to the processing of personal data a fundamental right. In 2018, 23 million people worldwide were developing software and thus carried the responsibility of implementing data security and privacy. However, the implementation of data and application security is a challenge, as evidenced by over 41 thousand documented security incidents in 2019. Probably the most basic, powerful, and frequently used tools software developers work with are Application Programming Interfaces (APIs). Security APIs are essential tools for bringing data and application security into software products. However, research results have revealed that usability problems of security APIs lead to insecure API use during development. Basic security requirements such as securely stored passwords, encrypted files, or secure network connections can become an error-prone challenge and, in consequence, lead to unreliable or missing security and privacy. Because software developers hold a key position in the development process of software, security tools that do not operate properly pose a risk to all people using software. However, little is known about the requirements of developers needed to address this problem and improve the usability of security APIs. This thesis is one of the first to examine the usability of security APIs. To this end, the author examines to what extent information flows can support software developers in using security APIs to implement secure software, by conducting empirical studies with software developers. This thesis has contributed fundamental results that can be used in future work to identify and improve important information flows in software development. The studies have clearly shown that developer-tailored information flows with adapted security-relevant content have a positive influence on the correct implementation of security. However, the results have also led to the conclusion that API producers need to pay special attention to the channels through which they direct information flows to API users and to how the information is designed to be useful for them. In many cases, it is not enough to provide security-relevant information via the documentation alone. Here, proactive methods like the API security advice proposed by this thesis achieve significantly better results in terms of findability and actionable support. To further increase the effectiveness of the API security advice, this thesis developed a cryptographic API warning design for the terminal by adopting a participatory design approach with experienced software developers. However, it also became clear that a single information flow can only provide support up to a certain extent. As observed in two studies conducted in complex API environments in web development, multiple complementary information flows have to meet the extensive information needs of developers to enable them to develop secure software. Some of the evaluated new approaches provided promising insights towards more API-consumer-focused documentation designs as a complement to API warnings.
Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head-Mounted Displays, and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user's gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing the global illumination of a scene interactively while allowing free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach, which allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
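The following sketch captures the core idea behind such gaze-contingent sample placement, with illustrative parameters that are not the values used in the thesis: sample density is kept high inside a foveal region around the tracked gaze point and falls off with eccentricity.

```python
# Hedged sketch of foveated sampling: samples per pixel as a function of the
# angular distance (eccentricity) of a pixel from the tracked gaze point.
import math

def samples_per_pixel(eccentricity_deg: float,
                      full_density: int = 4,
                      foveal_radius_deg: float = 5.0,
                      falloff: float = 0.15) -> int:
    """Full sampling inside the foveal region, exponential falloff outside."""
    if eccentricity_deg <= foveal_radius_deg:
        return full_density
    excess = eccentricity_deg - foveal_radius_deg
    return max(1, round(full_density * math.exp(-falloff * excess)))

for ecc in (0, 5, 15, 30):
    print(ecc, samples_per_pixel(ecc))   # density shrinks toward the periphery
```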
Since its advent, the sustainability effects of the modern sharing economy have been the subject of controversial debate. While its potential was initially discussed in terms of a post-ownership development, with a view to decentralizing value creation, increasing social capital, and relieving the environment through better utilization of material goods, critics have become increasingly loud in recent years. Many people hoped that carsharing could lead to a development away from ownership towards flexible use and thus to more resource-efficient mobility. However, carsharing remains a niche, and while many people like the idea in general, they appear to consider carsharing not to be advantageous as a means of transport in terms of cost, flexibility, and comfort. A key innovation that could elevate carsharing from its niche existence in the future is autonomous driving. This technology could give shared mobility a new boost by allowing it to overcome the weaknesses of the present carsharing business model. Flexibility and comfort could be greatly enhanced with shared autonomous vehicles (SAVs), which could simultaneously offer benefits in terms of low cost and better use of time, without the burden of vehicle ownership. However, it is not the technology itself that is sustainable; rather, sustainability depends on the way in which this technology is used. Hence, it is necessary to make a prospective assessment of the direct and indirect (un)sustainable effects before or during the development of a technology in order to incorporate these findings into the design and decision-making process. Transport research has been intensively analyzing the possible economic, social, and ecological consequences of autonomous driving for several years. However, research lacks knowledge about the consequences to be expected from shared autonomous vehicles. Moreover, previous findings are mostly based on the knowledge of experts, while potential users are rarely included in the research. To address this gap, this thesis contributes to answering the question of what the ecological and social impacts of the expected concept of SAVs will be. In my thesis, I study in particular the ecological consequences of SAVs in terms of the potential modal shifts they can induce, as well as their social consequences in terms of potential job losses in the taxi industry. To this end, I apply a user-oriented, mixed-method technology assessment approach that complements existing, expert-oriented technology assessment studies on autonomous driving, which have so far been dominated by scenario analyses and simulations. To answer the two questions, I triangulated scenario analysis with qualitative and quantitative user studies. The empirical studies provide evidence that the automation of mobility services such as carsharing may, to a small extent, foster a shift from the private vehicle towards mobility on demand. However, the findings also indicate that rebound effects are to be expected: significantly more users are expected to move away from the more sustainable public transportation, so that the negative modal shift effects overcompensate the positive ones. The results show that a large proportion of taxi trips could be replaced by SAVs, making the profession of taxi driver somewhat obsolete. However, interviews with taxi drivers revealed that the services provided by the drivers go beyond mere transport, so that even in the age of SAVs the need for human assistance will continue, though to a smaller extent. Given these findings, I see potential for action at different levels: users, mobility service providers, and policymakers. Regarding the environmental and social impacts resulting from the use of SAVs, there is a strong conflict of objectives among users, potential SAV operators, and sustainable environmental and social policies. In order to strengthen the positive effects and counteract the negative effects, such as unintended modal shifts, policies may soon have to regulate the design of SAVs and their introduction. A key starting point for transport policy is to promote the use of more environmentally friendly means of transport, in particular by making public transportation attractive and, if necessary, by making the use of individual motorized mobility less attractive. The taxi industry must face the challenges of automation by opening up to these developments and focusing on service orientation, thereby strengthening the drivers' main unique selling point compared to automated technology. Assessing the impacts of the not-yet-existing generally involves great uncertainty. With the results of my work, however, I would like to argue that a user-oriented technology assessment can usefully complement the findings of classic technology assessment methods and can iteratively inform the development process regarding technology and regulation.
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly utilized to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic need to be highly scalable and are therefore denoted Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems: in his doctoral thesis, Fielding introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is another crucial quality factor besides high scalability.
Due to the use of fossil resources, many environmental problems have been growing. Recent research therefore focuses on environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers, and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activity of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that double-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity was discussed with regard to the biomass source, the pulping process, and the degree of purification. Lignins obtained from industrial black liquor are compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). The DPPH inhibition is affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both scavenging activity and antimicrobial activity depend on the biomass source: Organosolv of softwood > Kraft of softwood > Organosolv of grass. Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin leaching from the produced films affected the activity positively, and the chitosan addition enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food spoilage bacteria.
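For reference, the standard DPPH radical-scavenging calculation underlying these inhibition values is shown below; the absorbance numbers are illustrative, not measurements from this work.

```python
# DPPH radical-scavenging calculation: inhibition is the relative drop in
# absorbance (typically read at ~517 nm) of the DPPH radical solution after
# reacting with the antioxidant sample.
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition of the DPPH radical."""
    return (a_control - a_sample) / a_control * 100.0

print(f"{dpph_inhibition(0.95, 0.31):.1f} % inhibition")  # -> 67.4 % inhibition
```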
For sustainable development, the electricity sector needs to be decarbonized. In 2017, only 54% of West African households had access to the electrical grid; renewable sources should therefore play a major role in the development of the power sector in West Africa. Above all, solar power shows the highest potential among renewable energy sources. However, it is highly variable, depending on the atmospheric conditions. This study addresses the challenges for a solar-based power system in West Africa by analyzing the atmospheric variability of solar power. For this purpose, two aspects are investigated. In the first part, the daily power reduction due to atmospheric aerosols is quantified for different solar power technologies. Meteorological data from six ground-based stations are used to model photovoltaic and parabolic trough power during all mostly clear-sky days in 2006, combining a radiative transfer model with a solar power model. The results show that the reduction due to aerosols can be up to 79% for photovoltaic and up to 100% for parabolic trough power plants during a major dust outbreak. The frequent dust outbreaks occurring in West Africa would thus cause frequent blackouts if sufficient storage capacities are not available. On average, aerosols reduce the daily power yields by 13% to 22% for photovoltaics and by 22% to 37% for parabolic troughs. In the second part, the long-term atmospheric variability and trends of solar irradiance are analyzed and their impact on photovoltaic yields is examined for West Africa. Based on a 35-year satellite data record (1983-2017), the temporal and spatial variability and the general trend are depicted for global and direct horizontal irradiances. Furthermore, photovoltaic yields are calculated on a daily basis. They show a strong meridional gradient, with highest values of 5 kWh/kWp in the Sahara and Sahel zone and lowest values in southern West Africa (around 4 kWh/kWp). The temporal variability is highest in southern West Africa (up to around 18%) and lowest in the Sahara (around 4.5%). This implies the need for North-South grid development in order to feed the increasing demand on the densely populated coast with solar power from the northern parts of West Africa. Additionally, global irradiances show a long-term positive trend (up to +5 W/m²/decade) in the Sahara and a negative trend (up to -5 W/m²/decade) in southern West Africa. If this trend continues, the spatial differences in solar power potential will increase in the future. This thesis provides a better understanding of the impact of atmospheric variability on solar power in a challenging environment like West Africa, which is characterized by the strong influence of the African monsoon. The importance of aerosols is pointed out, and long-term changes of irradiance are characterized with regard to their implications for photovoltaic power.
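As a hedged aside, the commonly used back-of-the-envelope form of the daily specific photovoltaic yield reported above is sketched below; the thesis couples a radiative transfer model to a full PV model, so this simplified formula, its performance ratio, and the example irradiation value are only assumptions for illustration.

```python
# Simplified daily specific PV yield in kWh per kWp: in-plane irradiation
# (kWh/m^2, i.e. peak-sun hours) times a system performance ratio.
def daily_pv_yield_kwh_per_kwp(daily_irradiation_kwh_m2: float,
                               performance_ratio: float = 0.8) -> float:
    """Back-of-the-envelope specific yield estimate."""
    return daily_irradiation_kwh_m2 * performance_ratio

print(daily_pv_yield_kwh_per_kwp(6.3))  # ~5 kWh/kWp for Sahara-like conditions
```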
Owing to increasing raw material scarcity, the search for alternative, sustainable raw materials is coming more and more to the fore. With regard to efficient chemical utilization, lignin offers numerous advantages for various fields of application, for example for bio-based polyurethane coatings such as those used for corrosion protection. Major problems in the use of lignin arise from the heterogeneity of this natural material and from its low polymerization compatibility with polyolefins; both factors affect, among other things, the mechanical properties of the resulting lignin-based polymers. Moreover, the specific structure of lignin, and hence its physical and chemical properties, depends strongly on the raw material source and the extraction process.
The aim of this work was the structural elucidation of unmodified and modified kraft lignins (KL) and the investigation of the reactivity of aromatic and aliphatic hydroxyl groups as a function of pH. To this end, unmodified KL were extracted from black liquor and then subjected to Soxhlet extraction to obtain lignin fractions soluble in methyltetrahydrofuran (primarily of aromatic character) and thus to ensure improved solubility in THF, the solvent used in the subsequent polyurethane synthesis. In addition, the extracted KL were chemically modified by demethylation of methoxy groups. The number of hydroxyl groups required for polymerization was quantified by wet-chemical methods and by differential UV/VIS spectroscopy. Subsequently, lignin-based and functionalized polyurethane coatings were synthesized with particular attention to ecological and economic sustainability aspects. Surface functionalization improved the surface homogeneity and, via blend formation, allowed triphenylmethane (TPM) dyes to be embedded in the coatings. Regarding the influence of the extraction pH (pH 2-5) on the behavior of the resulting KL, changes in both the structure of the lignins and their thermal stability were observed. It was also shown that the functionality and reactivity of the aromatic and aliphatic hydroxyl groups in lignin increase with increasing pH. Homogeneous lignin-based polyurethane coatings (LPU coatings) were successfully synthesized from unmodified KL; when KL extracted at higher pH values were used, these LPU coatings exhibited a more homogeneous, hydrophobic surface texture and good thermal stability. Additional modification of the KL by demethylation led to a moderate increase in reactivity, owing to the increased number of free hydroxyl groups, and thus to a further improvement of the surface properties with respect to a homogeneous surface structure and brilliance. With regard to sustainability, synthesis optimization (consisting of adjusting the raw material particle size, ultrasonic treatment, and the use of the commercial trifunctional polyether polyol Lupranol® 3300 in combination with Desmodur® L75) increased the solubility of lignin in the polyol as well as the thermal stability of the LPU coatings. In the course of these optimizations, shortened drying times saved energy and the amounts of commercially available chemicals could be reduced, both of which lowered costs. At the same time, this not only increased the KL content in the polymer coating: an optimized, economical one-step synthesis also made this approach easier to implement in industrial applications. Embedding selected TPM dyes (crystal violet and brilliant green) in the LPU coatings by blend formation demonstrably imparted an antimicrobial effect to the surface coating without any loss of surface homogeneity. The LPU coatings synthesized in this work could in the future be used as corrosion protection and antimicrobial coatings, e.g. in agriculture and in the construction sector.
The findings of the present work contribute to the structural elucidation of the complex biopolymer lignin. Beyond that, the investigations and results provide a basis for the sustainable production of lignin-based polymer coatings, which will become increasingly important in the future.
Optimization plays an essential role in industrial design, but is not limited to the minimization of a simple function such as cost or strength. These tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high-performing solutions. These techniques often require the evaluation of millions of solutions, making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method of modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model, but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency several-fold. In our second contribution we combine model-based optimization with MAP-Elites, a QD algorithm. A model of performance is created from known designs, and MAP-Elites creates a new set of designs using this approximation. A subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain-specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common "recipe" for high performance. This learned encoding can then be reused by other algorithms for rapid optimization, including by MAP-Elites itself. Throughout this thesis, though the focus of our vision is design, we examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
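A compact sketch of the MAP-Elites loop referred to above (an illustrative toy domain, not the thesis code): a grid over a behavior space in which each cell keeps only its best solution, so the archive accumulates diverse, high-performing designs.

```python
# Minimal MAP-Elites: mutate random elites, bin offspring by their behavior
# descriptor, and keep the best solution per behavior cell (per-cell elitism).
import random

def evaluate(x):
    fitness = -sum(xi**2 for xi in x)            # toy objective
    behavior = (round(x[0], 1), round(x[1], 1))  # toy behavior descriptor
    return fitness, behavior

archive = {}  # behavior cell -> (fitness, solution)
for _ in range(10_000):
    if archive and random.random() < 0.9:        # mutate a random elite
        parent = random.choice(list(archive.values()))[1]
        x = [xi + random.gauss(0, 0.1) for xi in parent]
    else:                                        # otherwise sample randomly
        x = [random.uniform(-1, 1) for _ in range(2)]
    fitness, cell = evaluate(x)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, x)             # keep the cell's best only

print(len(archive), "cells filled with diverse, high-performing solutions")
```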
Discrimination and classification of eight strains related to meat spoilage microorganisms commonly found in poultry meat were successfully carried out using two dispersive Raman spectrometers (a Microscope and a Portable Fiber-Optic system) in combination with chemometric methods. Principal Components Analysis (PCA) and Multi-Class Support Vector Machines (MC-SVM) were applied to develop discrimination and classification models. These models were validated using independent data sets, which were successfully assigned to the correct bacterial genera and even to the right strain. The discrimination of bacteria down to the strain level was performed on the pre-processed spectral data using a 3-stage model based on PCA. The spectral features and differences among the species on which the discrimination was based were clarified through the PCA loadings. For MC-SVM, the pre-processed spectral data were subjected to PCA and used to build a classification model. When using the first two components, the accuracy of the MC-SVM model was 97.64% and 93.23% for the validation data collected by the Raman Microscope and the Portable Fiber-Optic Raman system, respectively. The accuracy reached 100% when using the first eight and ten PCs of the data collected by the Raman Microscope and the Portable Fiber-Optic Raman system, respectively. The results reflect the strong discriminative power and high performance of the developed models, the suitability of the pre-processing method used in this study, and show that the lower accuracy of the Portable Fiber-Optic Raman system does not adversely affect the discriminative power of the developed models.
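The PCA-then-SVM workflow described here can be sketched generically with scikit-learn. This is not the thesis code; `X` and `y` below are random placeholders standing in for preprocessed Raman spectra and strain labels.

```python
# Illustrative PCA + multi-class SVM pipeline for spectral data (generic sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((160, 1200))  # placeholder: 8 strains x 20 spectra
y = np.repeat(np.arange(8), 20)       # placeholder strain labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),                                  # e.g. first eight PCs
    SVC(kernel="linear", decision_function_shape="ovo"),  # one-vs-one MC-SVM
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```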
Beyond the individual importance of health for every person, the relevance of “healthy employees” is also growing. Particularly in times of full employment, skilled-labor shortages, and a rising retirement age, the health of employees, and with it the working capacity of each individual, is moving into sharper focus. The state, social insurance providers, and companies are increasingly interested in designing workplaces and working conditions in a health-promoting way. Workplace health promotion (BGF) provides the framework for the many health-promoting interventions found in occupational settings. In this context, the work break can be regarded as a suitable intervention, albeit one whose design can vary widely.
In forensic DNA profiling, the occurrence of complex mixed profiles is currently a common issue. Cases involving intimate swabs or skin flake tape liftings are prone to mixed profiles because more than one donor contributes to a DNA sample. By DNA profiling of single spermatozoa and skin flakes, problems associated with mixed profiles could ideally be overcome. However, PCR is not sensitive enough to generate STR-based DNA profiles from single cells. Moreover, high-quality intact DNA is required, but it is not always available in skin flakes due to degradation. Additionally, single skin flakes are difficult to discriminate from other similar-looking particles on the tape liftings used to secure DNA samples from evidence. The main purpose of this study was to develop a method that enables DNA profiling of single sperm cells and skin flakes. After studying multiple whole genome amplification (WGA) protocols, REPLI-g Single Cell WGA was selected due to its suitability for the pre-amplification step of template DNA. Micromanipulation was used to isolate single spermatozoa. Micromanipulation in combination with REPLI-g Single Cell WGA resulted in successful DNA profiling of single spermatozoa using autosomal STRs as well as X- and Y-chromosomal STRs. The single-spermatozoon DNA profiling method described in this thesis was successfully used to identify male contributors from mock intimate swabs containing a mixture of semen from multiple male contributors. Different dyes were analysed to develop a staining method to discriminate skin flakes from other particles, including those from hair cosmetic products. Of all dyes tested, Orange G was the only one that successfully discriminated skin flakes from hair product particles. Also, an alkaline-based lysis protocol was developed that allowed PCR to be carried out directly on the lysates of single skin flakes. Furthermore, REPLI-g Single Cell WGA was tested on single skin flakes. In contrast to the single spermatozoa, REPLI-g Single Cell WGA was not successful in DNA profiling of single skin flakes. The single-skin-flake DNA profiling method described in this thesis was successfully used to correctly identify contributors from mock mixed DNA evidence. Additionally, a small-amplicon-based NGS method was tested on single skin flakes. Compared to the PCR and CE approach, the small-amplicon-based NGS method improved DNA profiling of single skin flakes, giving a significant increase in allele recovery. In conclusion, this study shows that circumventing mixtures is possible by DNA profiling of single spermatozoa, using micromanipulation and WGA. Furthermore, DNA profiling of single skin flakes has been improved by staining tape liftings with Orange G, alkaline lysis, direct PCR, and a small-amplicon-based NGS approach. Nonetheless, future work is required to assess the performance of the single-spermatozoon method on mock swabs with more diluted semen. Also, commercially available NGS kits should be tested with single skin flakes and compared with the in-house NGS method.
Due to the popularity of the Internet and the networked services that it facilitates, networked devices have become increasingly common in both the workplace and everyday life in recent years, following the trail blazed by smartphones. The data provided by these devices allow for the creation of rich user profiles. As a result, the collection, processing and exchange of such personal data have become drivers of economic growth. History shows that the adoption of new technologies is likely to influence both individual and societal concepts of privacy. Research into privacy has therefore been confronted with continuously changing concepts due to technological progress. From a legal perspective, privacy laws that reflect social values are sought. Privacy enhancing technologies are developed or adapted to take account of technological development. Organizations must also identify protective measures that are effective in terms of scalability and automation. Similarly, research is being conducted from the perspective of Human-Computer Interaction (HCI) to explore design spaces that empower individuals to manage their protection needs with regard to novel data, which they may perceive as sensitive. Taking such an HCI perspective on understanding privacy management in the Internet of Things (IoT), this research focuses on three interrelated goals across the fields of application: 1. Exploring and analyzing how people make sense of data, especially when managing privacy and data disclosure; 2. Identifying, framing and evaluating potential resources for designing sense-making processes; and 3. Exploring the fitness of the identified concepts for inclusion in legal and technical perspectives on supporting decisions regarding privacy in the IoT. Although this work's point of departure is the HCI perspective, it emphasizes the importance of the interrelationships among seemingly independent perspectives. Their interdependence is therefore also taken into account by subscribing to a user-centered design process throughout this study. More specifically, this thesis adopts a design case study approach. This approach makes it possible to conduct full user-centered design lifecycles in a concrete application case with participants in the context of everyday life. Based on this approach, it was possible to investigate several currently relevant domains of the IoT, namely smart metering, smartphones, smart homes and connected cars. The results show that the participants were less concerned about (raw) data than about the information that could potentially be derived from it. Against the background of the constant collection of highly technical and abstract data, whose content only becomes visible through the application of complex algorithms, this study indicates that people should learn to explore and understand these data flexibly, and provides insights into how to design support for this aim. From the point of view of designing usable privacy protection measures, the information that is provided to users about data disclosure should focus on its consequences for users' environments and lives. A related concept from law is “informed consent,” which I propose should be further developed in order to implement usable mechanisms for individual privacy protection in the era of the IoT.
Finally, this thesis demonstrates how research on HCI can be methodologically embedded in a regulative process that will inform both the development of technology and the drafting of legislation.
Solving differential-algebraic equations (DAEs) efficiently by means of appropriate numerical schemes for time-integration is an ongoing topic in applied mathematics. In this context, effective computation becomes especially relevant for the large systems that occur in many fields of practical application. Corresponding examples arise when simulating network structures that describe the transport of fluid and gas, or electrical circuits. Due to the stiffness properties of DAEs, time-integration of such problems generally demands implicit strategies. Among the schemes that prove to be an adequate choice are linearly implicit Runge-Kutta methods in the form of Rosenbrock-Wanner (ROW) schemes. Compared to fully implicit methods, they are easy to implement and avoid the solution of non-linear equations by including Jacobian information within their formulation. However, Jacobian calculations are a costly operation, so the necessity of computing the exact Jacobian at every successful time-step proves to be a considerable drawback. To overcome this drawback, a ROW-type method is introduced that allows for non-exact Jacobian entries when solving semi-explicit DAEs of index one. The resulting scheme thus makes it possible to exploit several strategies for saving computational effort. Examples include partially explicit integration of non-stiff components, the use of more advantageous sparse Jacobian structures, and the use of time-lagged Jacobian information. In fact, due to the property of allowing for non-exact Jacobian expressions, the given scheme can be interpreted as a generalized ROW-type method for DAEs, because it covers many different ROW-type schemes known from the literature. To derive the order conditions of the introduced ROW-type method, a theory is developed that makes it possible to identify the occurring differentials and coefficients graphically by means of rooted trees. Rooted trees for describing numerical methods were originally introduced by J.C. Butcher. They significantly simplify the determination and definition of relevant characteristics because they allow straightforward procedures to be applied. In fact, the presented theory combines strategies used to represent ROW-type methods with exact Jacobian for DAEs and ROW-type methods with non-exact Jacobian for ODEs. For this purpose, new types of vertices are considered in order to describe the occurring non-exact elementary differentials completely. The resulting theory thus automatically comprises relevant approaches known from the literature. As a consequence, it makes it possible to recognize the order conditions of familiar methods covered and to identify new conditions. With the theory developed, new sets of coefficients are derived that realize the introduced ROW-type method up to orders two and three. Some of them are constructed based on methods known from the literature that satisfy additional conditions for the purpose of avoiding effects of order reduction. It is shown that these methods can be improved by means of the newly derived order conditions without having to increase the number of internal stages. Convergence of the resulting methods is analyzed with respect to several academic test problems. The results verify the developed theory and the order conditions found, as only schemes satisfying the predicted order conditions preserve their order when non-exact Jacobian expressions are used.
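For orientation, a generic s-stage Rosenbrock-Wanner step for an autonomous ODE y' = f(y) can be written as follows; this is the standard textbook form, not the specific DAE scheme derived in the thesis:

```latex
(I - h\gamma J)\,k_i = f\!\Big(y_n + h\sum_{j=1}^{i-1}\alpha_{ij}k_j\Big) + hJ\sum_{j=1}^{i-1}\gamma_{ij}k_j,
\qquad i = 1,\dots,s,
\qquad
y_{n+1} = y_n + h\sum_{i=1}^{s} b_i k_i,
```

where J approximates the Jacobian ∂f/∂y. Only s linear systems are solved per step, and the form makes visible why a non-exact J is attractive: the expensive exact Jacobian appears only inside the linear solves, not in a Newton iteration.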
The present thesis elucidates the development of (i) a series of small-molecule inhibitors that react in a covalent-irreversible manner with the targeted proteases and (ii) a fluorescently labeled activity-based probe as a pharmacological tool compound for investigating specific functions of these enzymes in vitro. The rational design, organic synthesis, and quantitative structure-activity relationships are described extensively.
Globalisation and increasing international trade have for years raised the number and the risk of introductions of foreign species and invasive pests. While native species have adapted to their habitat over many years and generations, invasive intruders often possess characteristics that are superior to those of native species. Thus, and because of a lack of natural enemies, they have the potential to decimate or completely displace native species; furthermore, the introduction of pathogens or nematodes, with the pest acting as a vector, carries a high damage potential. The measures available to local plant protection services to combat invasive species are limited to the felling of infested trees or plants and regular controls within the infested area. The spread of individual infestations can thereby be prevented, but undetected infestations can spread unimpeded, which points to the main challenge: the detection of the species. This concerns infestations in open land as well as the single animal on its path of introduction. There is only little research activity concerning the development of new, adequate detection systems for invasive species. In other fields, such as the detection of explosives or narcotics, research activities date back more than a decade, and consequently detection systems are available that are, for example, used for explosive detection at airports. The detection principle is based on the chemistry of these substances.
Evaluation and Optimization of IEEE802.11 multi-hop Backhaul Networks with Directional Antennas
(2020)
A major problem for rural areas is the lack of access to affordable broadband Internet connections. In these areas, distances are large, and digging a cable into the ground is extremely expensive considering the small number of potential customers at the end of that cable. This leads to a digital divide, in which urban areas enjoy high-quality service at low cost while rural areas suffer from the reverse.
This work is dedicated to an alternative technical approach aiming to reduce the cost for Internet Service Providers in rural areas: WiFi-based Long Distance networks. A set of significant contributions to technology-related aspects of WiFi-based Long Distance networks is described in three fields: propagation on long-distance Wi-Fi links, MAC-layer scheduling, and interference modeling and channel assignment with directional antennas.
For each field, the author surveys and discusses the state of the art, derives research questions, and tackles several open issues in order to develop these networks further towards a suitable technology for the backhaul segment.
In this thesis, unique administrative data, a substantial follow-up period, and advanced statistical methods for handling confounding have been utilized to provide new and informative evidence on the effects of vocational rehabilitation programs on work participation outcomes in Germany. While reaffirming the important role of micro-level determinants, the present study provides an extensive example of the individual and fiscal effects that are possible through meaningful vocational rehabilitation measures. The analysis showed that the principal objective, namely to improve participation in employment, was generally achieved. Contrary to the common misconception that “off-the-job training” is relatively ineffective, this thesis has provided an empirical example of the positive impact of the programs.
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. Dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field-of-view, due either to their size or to their head-mounted form. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time-frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality and to reduce workload alike - a concept we call perception-driven rendering. This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capture of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the introduced methods are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature influence the viscoelastic behavior significantly in certain temperature ranges. Flow induced orientations due to higher draw ratio and higher mold temperature lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected by local crystallinities and local complex moduli. Additionally, a model to calculate three-dimensional anisotropic coefficients of thermal expansion as a function of the process dependent crystallinity was developed based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
The initially large number of variants is reduced by applying custom variant annotation and filtering procedures. This requires complex software toolchains to be set up and data sources to be integrated. Furthermore, increasing study sizes require ever higher efforts to manage datasets in a multi-user and multi-institution environment. When the cause of a disease or phenotype is unknown, numerous iterations of respecification and refinement of the filter strategies are to be expected. Data analysis support during this phase is fundamental, because handling the large volume of data is infeasible or inadequate for users with limited computer literacy. Constant feedback and communication are necessary when filter parameters are adjusted or the study grows with additional samples. Consequently, variant filtering and interpretation become time-consuming and hinder a dynamic and explorative data analysis by experts.
Pseudopotential (PP)-based lattice Boltzmann methods are increasingly used for the simulation of multiphase flows. Since they rest on a phenomenological approach, their use entails considerable modeling effort. In addition, so-called spurious velocities arise at the phase interfaces, impairing accuracy and numerical stability. In this work, PP models are therefore extended by three new aspects. First, it is shown that modeling different contact angles with common methods, in combination with improved forcing schemes, gives rise to spurious droplets. These are eliminated by a novel approach based on additional boundary conditions for all interaction forces. This technique not only prevents the occurrence of spurious droplets but also increases stability in wall-bounded flows. Second, a novel method for reducing spurious velocities is introduced. The discretization of the interaction forces is extended, and the additional free coefficients are optimized numerically in simulations of static droplets. The resulting discretization was validated in simulations of stationary and dynamic test cases, in which spurious velocities were significantly reduced. Third and finally, the diffusion properties in multicomponent systems were investigated in detail, revealing a critical dependence of the macroscopic diffusion coefficients on the forcing scheme. This analysis forms the basis for the comparison and future development of new potential functions (for multicomponent systems) and reduces the modeling effort.
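For context, the classical single-component pseudopotential interaction force on which such models build is, in its standard Shan-Chen form from the literature (not the extended discretization developed here):

```latex
\mathbf{F}(\mathbf{x}) \;=\; -\,G\,\psi(\mathbf{x}) \sum_i w_i\,\psi(\mathbf{x} + \mathbf{c}_i \Delta t)\,\mathbf{c}_i ,
```

where ψ is the pseudopotential (an effective density), G the interaction strength, and w_i, c_i the lattice weights and velocities. The free coefficients optimized in this work enter through an extended discretization of this interaction sum.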
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context, this work focuses on specific faults that arise from the interaction of a robot with its real-world environment. Even a well-modelled robot may fail to perform its tasks successfully due to external faults, which occur because of an infinite number of unforeseeable and unmodelled situations. By investigating the most frequent failures in typical scenarios observed in real-world demonstrations and competitions with the autonomous service robots Care-O-Bot III and youBot, we identified four fault classes caused by disturbances, imperfect perception, an inadequate planning operator, or the chaining of action sequences. This thesis then presents two approaches to handle external faults caused by insufficient knowledge about the preconditions of the planning operator. The first approach reasons about detected external faults using knowledge of naive physics. The naive-physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties in order to reason about the fault. By interpreting the reasoning results, the robot identifies the situations that can cause the fault. Applying this approach to simple manipulation tasks, such as picking and placing objects, shows that naive physics holds great possibilities for reasoning about unknown external faults in robotics. The second approach acquires missing knowledge about the execution of an action through learning by experimentation. It first investigates a representation of execution-specific knowledge that can be learned for one particular situation and reused in situations that deviate from the original. The combination of symbolic and geometric models allows execution knowledge to be represented effectively; this representation is here called the action execution model (AEM). The approach provides a learning strategy that uses a physical simulation to generate the training data for learning both the symbolic and the geometric aspects of the model. The experimental analysis, performed on two physical robots, shows that AEM can reliably describe execution-specific knowledge and thereby serves as a potential model for avoiding the occurrence of external faults.
As a renewable industrial and energy crop, Miscanthus offers numerous advantages that permit not only direct agricultural uses such as combustion and animal bedding but also material use in the chemical sector. As a C4 plant with enhanced photosynthetic activity, Miscanthus also exhibits a high CO2 fixation rate. Owing to its low cultivation effort and high yields, Miscanthus is an exceptionally attractive raw material for the production of renewable fuels and chemicals obtained via thermo-chemical conversion.
This work investigates the detection of packaged hazardous substances such as explosives. In a first step, the packaging is penetrated by laser drilling so that the now exposed hazardous substance can subsequently be detected. This is done, on the one hand, by laser-based sampling and subsequent detection with established chemical-analytical methods and, on the other hand, directly during the laser-substance interaction by means of Raman spectroscopy. In addition, fast in situ techniques are examined with regard to their suitability for monitoring the laser drilling process. Low-cost, compact sensor technologies (measurement of the process gases with semiconductor gas sensors, measurement of airborne sound with a condenser microphone) are compared and evaluated against more elaborate and complex spectroscopic methods (plasma and Raman spectroscopy). Using selected small-scale model systems, the different methods are characterized with common packaging and casing materials as well as selected explosives. Pulsed Nd:YAG lasers with different emission wavelengths are used for the laser process.
The design of digital circuits that are efficient in terms of low power has become a very challenging issue. For this reason, low-power digital circuit design is a topic addressed in electrical and computer engineering curricula, but it also requires practical experiments in a laboratory. This PhD research investigates a novel approach, a low-power design laboratory system, by developing a new technical and pedagogical system. The low-power design laboratory system is composed of two types of laboratories: the on-site (hands-on) laboratory and the remote laboratory. It has been developed at the Bonn-Rhein-Sieg University of Applied Sciences to teach low-power techniques in the laboratory. Additionally, this thesis contributes a suggestion on how the learning objectives can be complemented by a remote system in order to improve the teaching of low-power digital circuit design. This laboratory system enables online experiments that are performed on physical instruments and obtain real data via the internet. The laboratory experiments use a Field Programmable Gate Array (FPGA) as a design platform for circuit implementation by students and use image processing as an application for teaching low-power techniques.
This thesis presents the instructions for the low-power design experiments, which use a top-down hierarchical design methodology. The engineering student designs his or her algorithm at a high level of abstraction, and the experimental results are obtained and measured at a low level (hardware), so that more information, such as the specification, latency, thermal effects, and the technology used, is available to correctly estimate the power dissipation. The power dissipation of a digital system is influenced by its specification, design, and technology, as well as by the operating temperature. Digital circuit designers can observe the most influential factors in power dissipation during the laboratory exercises in the on-site system and then use the remote system to investigate the remaining factors. Furthermore, the remote system has obvious benefits: improving learning outcomes, facilitating new teaching methods, reducing costs and maintenance, saving instructor time and simplifying instructors' tasks, facilitating equipment sharing, improving reliability, and providing flexibility in using the laboratories.
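As background for the factors discussed above, a first-order textbook model of CMOS power dissipation (general knowledge, not a formula specific to this laboratory system) is:

```latex
P \;\approx\; \underbrace{\alpha\, C_L\, V_{dd}^{2}\, f}_{\text{dynamic switching}} \;+\; \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static leakage}} ,
```

where α is the switching activity, C_L the switched capacitance, V_dd the supply voltage, f the clock frequency, and I_leak the (strongly temperature-dependent) leakage current. This makes explicit why specification, design, technology, and operating temperature all enter the measurements performed in the laboratory.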
In the present work, the potential of simple semiconductor gas sensors for use in complex problems was investigated. One topic that has, quite literally, become burningly relevant here is the detection of explosive substances. 42,547 terrorist attacks were committed with energetic materials between 2000 and 2016, and more than half of them claimed human lives. Terrorism is a threat, and new explosive mixtures whose analytical data are not contained in any detector database currently pose an enormous threat potential; such hazardous substances are difficult to detect with established library-based methods. In this work, a library-free detector was developed that can quickly and reliably assess the explosiveness of unknown substances by evaluating their reaction profiles. It was shown that the use of semiconductor gas sensors in combination with photodiodes and a pressure sensor, given a well-designed reaction procedure and evaluation algorithms tailored to the task, is effective and enables an extremely high detection rate of 99.8%. Furthermore, a simple fabrication route for semiconductor gas sensors based on the existing precursor library was found, which will in the future allow targeted manipulation of the sensors' properties by varying the precursor used and the sensor fabrication parameters. The sensors produced in this way were integrated into the developed detector and showed great potential not only for library-free assessment of the explosiveness of an unknown substance but also for drawing conclusions about its identity.
This thesis deals with the efficiency of side-channel cryptanalysis. In Part II, we demonstrate how the most important analysis tools can be accelerated considerably using the CUDA platform. Second, we investigate new approaches to profiled side-channel cryptanalysis. The field of machine learning can be adapted for significant improvements here but has received little attention in this respect. In Part III, we present two new methods that exhibit some commonalities but also some differences, so that evaluation results can be shown in a more complete picture. Furthermore, in Part IV we propose a side-channel application for the protection of intellectual property (IP). In Part V, we engage more deeply in practical side-channel cryptanalysis by performing attacks on a security microcontroller used in an EC card that is widespread in Germany.
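One canonical analysis tool in this space is correlation power analysis (CPA). The following Python sketch is a generic illustration under a simple Hamming-weight leakage model on (plaintext XOR key); it is not the thesis' CUDA-accelerated implementation, and the leakage model is an assumption of this sketch.

```python
# Generic correlation power analysis (CPA) sketch (illustrative only).
import numpy as np

def hamming_weight(x):
    """Hamming weight of each byte in a uint8 array."""
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_key_byte(traces, plaintexts):
    """traces: (n, samples) float array; plaintexts: (n,) uint8 array."""
    tr_c = traces - traces.mean(axis=0)          # center every sample point
    tr_norm = np.sqrt((tr_c ** 2).sum(axis=0))
    best_corr = np.zeros(256)
    for guess in range(256):
        hyp = hamming_weight(plaintexts ^ guess).astype(float)
        hyp_c = hyp - hyp.mean()
        # Pearson correlation of the hypothesis with every sample point
        corr = (hyp_c @ tr_c) / (np.sqrt((hyp_c ** 2).sum()) * tr_norm + 1e-12)
        best_corr[guess] = np.abs(corr).max()
    return int(np.argmax(best_corr))             # most likely key byte
```

The inner loop over 256 key guesses and all sample points is embarrassingly parallel, which is exactly why GPU platforms such as CUDA yield large speed-ups for this class of tools.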
In this work, novel methodological extensions of the lattice Boltzmann method (LBM) are developed that enable more efficient simulations of incompressible vortical flows. These extensions remedy two main problems of the standard LBM: its instability in under-resolved turbulent simulations and its restriction to regular computational grids. First, a pseudo-entropic stabilization (PES) is developed. It combines approaches from multiple-relaxation-time (MRT) models and the entropic LBM into an explicit, local, and flexible stabilization operator. This modification of the collision step permits stable and qualitatively correct simulations even on strongly under-resolved grids. To extend the LBM to irregular computational grids, a modern discontinuous Galerkin LBM is first examined and supplemented with more stable time integrators. This study demonstrates the drastic weaknesses of existing LBM approaches on irregular grids. Based on the insights gained, a novel semi-Lagrangian LBM (SLLBM) is formulated. It uniquely enables the use of irregular grids and large time steps as well as a high spatial order of convergence. Example simulations demonstrate why this approach is superior in efficiency and accuracy to other current off-lattice Boltzmann methods (OLBMs). Further novel aspects of this work are the development of a modular off-lattice Boltzmann code and the extension of the LBM by implicit multistep methods, which increase the temporal order of convergence.
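For reference, the standard lattice-BGK update that these extensions build on reads, in its textbook form:

```latex
f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\; t + \Delta t)
\;=\;
f_i(\mathbf{x}, t) \;-\; \frac{\Delta t}{\tau}\Big[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \Big],
```

where f_i are the particle distribution functions along the lattice velocities c_i and τ is the relaxation time. The PES modifies the collision term on the right-hand side, while the semi-Lagrangian formulation generalizes the streaming on the left-hand side, removing the tie between c_i Δt and a regular lattice.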
As robots become ubiquitous and more capable, the need to introduce solid robot software development methods that broaden robots' task spectrum is pressing. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, namely research on the specification, deployment, and adaptation of robot perception systems.
The knowledge of Software Features (SFs) is vital for software developers and requirements specialists during all software engineering phases: to understand and derive software requirements, to plan and prioritize implementation tasks, to update documentation, or to test whether the final product correctly implements the requested SF. In most software projects, SFs are managed in conjunction with other information, such as bug reports, programming tasks, or refactoring tasks, with the aid of Issue Tracking Systems (ITSs). Hence, ITSs contain a variety of information that is only partly related to SFs. In practice, however, using ITSs to store SFs comes with two major problems: (1) ITSs are neither designed nor used as documentation systems; the data inside an ITS is therefore often uncategorized, and SF descriptions are concealed in rather lengthy issue texts. (2) Although an SF is often requested in a single sentence, related information can be scattered among many issues; for example, implementation tasks related to an SF are often reported in additional issues. Hence, the detection of SFs in ITSs is complicated: a manual search for SFs implies reading, understanding, and exploiting the Natural Language (NL) of many issues in detail. This is cumbersome and labor-intensive, especially if related information is spread over more than one issue. This thesis investigates whether SF detection can be supported automatically. First, the problem is analyzed: (i) An empirical study shows that requests for important SFs reside in ITSs, making ITSs a good target for SF detection. (ii) A second study identifies characteristics of the information and the related NL in issues. These characteristics represent opportunities as well as challenges for the automatic detection of SFs. Based on these problem studies, the Issue Tracking Software Feature Detection Method (ITSoFD) is proposed. The method has two main components and includes an approach to preprocess issues; each component addresses one of the problems associated with storing SFs in ITSs. ITSoFD is validated in three solution studies: (I) An empirical study researches how NL that describes SFs can be detected with techniques from Natural Language Processing (NLP) and Machine Learning. Issues are parsed, and different characteristics of each issue and its NL are extracted. These characteristics are used to classify the issue's content and identify SF description candidates, thereby addressing problem (1). (II) An empirical study researches how issues that carry information potentially related to an SF can be detected with techniques from NLP and Information Retrieval. Characteristics of the issues' NL are utilized to create a traceability network of related issues, thereby addressing problem (2). (III) An empirical study researches how NL data in issues can be preprocessed using heuristics and hierarchical clustering. Code, stack traces, and other technical information are separated from NL; heuristics are used to identify candidates for technical information, and clustering improves the heuristics' results. This technique can be applied to support components I and II.
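The issue-content classification in component (I) can be sketched with a generic NLP/ML pipeline. This is an illustration only, not the ITSoFD implementation; the example texts and the TF-IDF plus logistic regression setup are assumptions of this sketch.

```python
# Illustrative sketch: classify issue text as SF description vs. other content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Add an export-to-CSV option to the report view",  # feature description
    "NullPointerException when saving an empty form",  # bug report
]
labels = [1, 0]  # 1 = software-feature description, 0 = other issue content

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Support drag-and-drop upload of attachments"]))
```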
During the last 50 years, a broad range of visible-light-curing resin-based composites (VLC RBC) has been developed for restorative applications in dentistry. Correspondingly, the technologies of light curing units (LCU) have changed from UV to visible blue light and, within the visible range, from quartz tungsten halogen over plasma arc to LED LCUs, with significantly increasing light intensity. In this thesis, the influence of the curing conditions, in terms of irradiance, exposure time, and irradiance distribution of the LCU, on the reaction kinetics as well as on the corresponding mechanical and viscoelastic properties was investigated.
Over the last 50 years, the controlled motion of robots has become a very mature domain of expertise. It can deal with all sorts of topologies and types of joints and actuators, with kinematic as well as dynamic models of devices, and with one or several tools or sensors attached to the mechanical structure. Nevertheless, the domain has not succeeded in standardizing the modelling of robot devices (including such fundamental entities as “reference frames”!), let alone the semantics of their motion specification and control. This thesis aims to solve this long-standing problem, from three different sides: semantic models for robot kinematics and dynamics, semantic models of all possible motion specification and control problems, and software that can support the latter while being configured by a systematic use of the former.
Lignin is already an intensive field of research; however, links between source, pulping method, and application are rarely described in the literature. In the present work, lignins from different sources (wheat straw, beech, softwood) and pulping methods (AFEX, steam explosion, Organosolv, acid hydrolysis) are analytically characterized with regard to their use in polymeric materials. A broad selection of methods was employed: FT-IR spectroscopy, UV-Vis, 31P-NMR, GPC, pyrolysis GC/MS, as well as HPLC for determining purity according to the NREL standard protocol. Thermal analyses such as TGA and DSC showed glass transition temperatures around 120 °C and decomposition temperatures between 340 °C and 380 °C. The results reveal highly pure fractions for the Organosolv beech lignin that had not been achieved before. The results of this work identify Organosolv beech lignins as a usable product with regard to applications in polyurethanes as well as phenol-formaldehyde resins.
In this doctoral thesis, the curing processes of visible-light-curing (VLC) dental composites and 3D-printing rapid prototyping (RP) materials are investigated, with a focus on dielectric analysis (DEA). This method can monitor the curing of resins in an alternating electric fringe field with adjustable frequencies; it is often used for cure control in composites manufacturing in the aviation and automotive industries but is hardly established in dental science or in RP method development. It is capable of investigating very fast initiation and primary curing processes using high frequencies in the kHz range. The aim of the thesis is a better understanding of the curing processes with respect to curing parameters such as resin composition, viscosity, temperature, and, for light-curing composites, also light intensity and irradiation depth. Due to the nature of both dental and RP systems, application-specific experimental set-ups had to be designed to allow the generation of reproducible and valid results. Subsequently, different evaluation methods were developed to characterize the curing behavior of both material types. A special focus was placed on the determination of kinetic parameters from DEA measurements. Reaction rates of the curing of the corresponding thermosets were calculated and applied to the ion viscosity curves measured by DEA to evaluate reaction kinetic parameters. For the dental composites, it could be clearly shown that the initial curing rate is directly proportional to the light intensity and not to its square root, as proposed by many other authors. A good description of the curing behaviour of 3DP RP materials was also achieved by assuming a reaction order smaller than one. These data provide the basis for the kinetic modeling of the polymerization and curing processes proposed within the thesis.
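To spell out the contrast drawn above: classical free-radical photopolymerization with bimolecular termination predicts a square-root dependence of the polymerization rate on light intensity, whereas the DEA data here indicate a linear one, and the RP materials follow an nth-order rate law with n < 1. In standard textbook notation (the symbols are not taken from the thesis):

```latex
R_p = k_p\,[M]\,\sqrt{\frac{\phi\, I_a}{k_t}} \;\propto\; \sqrt{I} \quad \text{(classical)},
\qquad
R_{p,0} \;\propto\; I \quad \text{(observed here)},
\qquad
\frac{d\alpha}{dt} = k\,(1-\alpha)^{n},\; n < 1 ,
```

where [M] is the monomer concentration, φ the initiation quantum yield, I_a the absorbed light intensity, k_p and k_t the propagation and termination rate constants, and α the degree of conversion.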
During space missions, astronauts suffer from cardiovascular deconditioning when they are exposed to microgravity. Until now, no specific drugs have been available as effective countermeasures, since the underlying mechanism is not completely understood. Endothelial cells (ECs) and smooth muscle cells (SMCs) play crucial roles in a variety of cardiovascular functions, many of which are regulated via P2 receptors. However, the function of these receptors in ECs and SMCs under microgravity conditions is still unknown. In this study, ECs and SMCs were isolated from bovine aorta and differentiated from human mesenchymal stem cells (hMSCs), respectively. Subsequently, the cells were verified based on specific markers. An altered P2 receptor expression pattern was detected during the commitment of hMSCs towards ECs and SMCs. The administration of natural and artificial P2 receptor agonists and antagonists directly affected the differentiation process. By using EC growth medium as conditioned medium, a vessel cell model was created to culture SMCs, and vice versa. Within this study, we were able to show for the first time that the expression of some P2 receptors is altered in ECs and SMCs grown for 24 h under simulated microgravity. For some P2 receptors, such as P2X7, the conditioned medium compensated for this change in expression.
In conclusion, our data show that P2 receptors play an important functional role in hMSC differentiation towards ECs and SMCs. Since some artificial P2 receptor ligands are already used as drugs for patients with cardiovascular diseases, it is reasonable to assume that in the future they might be promising candidates for treating cardiovascular deconditioning.