Graduierteninstitut
Departments, institutes and facilities
- Graduierteninstitut (68)
- Fachbereich Angewandte Naturwissenschaften (25)
- Fachbereich Informatik (21)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (8)
- Fachbereich Wirtschaftswissenschaften (7)
- Fachbereich Ingenieurwissenschaften und Kommunikation (5)
- Institut für Sicherheitsforschung (ISF) (4)
- Institut für Verbraucherinformatik (IVI) (4)
- Institute of Visual Computing (IVC) (4)
- Institut für Cyber Security & Privacy (ICSP) (3)
Keywords
- Lignin (3)
- Virtual Reality (3)
- Antioxidans (2)
- Evolutionary optimization (2)
- Gesundheitsförderung (2)
- Gitter-Boltzmann-Methode (2)
- Human-Computer Interaction (2)
- Nachhaltigkeit (2)
- Quality diversity (2)
- Robotics (2)
Maschinen und Atmosphären
(2025)
How does 'the virtual' relate to poetics that took shape in Romanticism? Andreas Sieß shows that the aesthetic notions of what 'the virtual' is not only consolidated as early as around 1800, but that the (pictorial-)aesthetic standards that are fundamental to modern virtual reality applications today were already being negotiated back then. Using the concepts of 'machine' and 'atmosphere', he examines two opposing thrusts of the virtual whose dialectical interplay offers a new perspective on questions concerning the design of contemporary virtual media.
In this thesis, the nematic liquid crystal mixtures E7 and E8 are doped with a reactive, optically active substance for gas-sensing purposes. The doping induces a chiral nematic phase that forms a one-dimensional photonic crystal with reflection maxima in the visible range of the electromagnetic spectrum. As a result of a chemical reaction between the dopant and an analyte, the dopant's chemical composition and hence its helical twisting power (HTP) change. This change shifts the reflected wavelength range, which can be perceived with the naked eye as a change in color. Coaxial electrospinning is used to encapsulate liquid crystals in polymer fibers a few micrometers in diameter. Encapsulated and non-encapsulated doped liquid crystals are compared by UV/VIS spectroscopy in a reaction chamber developed for this purpose. The reactions taking place are investigated by FTIR spectroscopy, and the fibers and the liquid crystals used are characterized by light microscopy. In addition, ways to improve the water resistance of the produced fibers are investigated in order to increase their suitability for future technical applications. To this end, triaxial electrospinning is used to coat the fibers with an additional water-resistant polymer sheath, and post-spinning cross-linking of coaxially spun fibers is explored as a further route to water resistance.
Process-induced changes in the thermo-mechanical viscoelastic properties and the corresponding morphology of biodegradable polybutylene adipate terephthalate (PBAT) and polylactic acid (PLA) blown film blends modified with four multifunctional chain-extending cross-linkers (CECL) were investigated. The introduction of CECL modified the properties of the reference PBAT/PLA blend significantly. Thermal analysis showed that the chemical reactions were incomplete after compounding and that film blowing extended them. SEM investigations of the fracture surfaces of blown extrusion films revealed the significant effect of CECL on the morphology formed during processing. The anisotropic morphology introduced during film blowing proved to affect the degradation processes as well. Furthermore, the reactions of CECL with PBAT/PLA induced by the processing depend on the deformation directions. The blow-up ratio was varied to investigate further process-induced changes, which proved to interact with mechanical and morphological features. In blown film extrusion, elongational behavior is a very important characteristic, but its evaluation is often problematic; with the SER Universal Testing Platform it was possible to determine changes in the time intervals corresponding to the rupture of elongated samples.
Traditional and newly developed testing methods were used for extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure-sensitive adhesives (PSA). Large amplitude oscillatory shear tests of PSAs were correlated with the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude results in stretching and yielding of the PSA's microstructure, causing a consolidation of the network and, ultimately, its release. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed clearly resin-content- and rod-geometry-dependent behavior and reflect the PSA's viscoelasticity, resulting either in high tack with long, stretched fibrils or in non-adhesion and brittle behavior. Moreover, diffusion of water or sweat during a TTS's application might influence its performance. A dielectric-analysis-based evaluation method was therefore developed that reveals water diffusion into the PSA, from which the diffusion coefficient can be determined, and which showed clearly material- and resin-content-dependent behavior. All methods allow for advanced product-oriented material testing that can be utilized in further TTS development.
In recent years, eXtended Reality (XR) technologies such as Augmented Reality and Virtual Reality have become both technically feasible and affordable, which has led to a sharp demand for professionally designed and developed applications. However, this demand, combined with a rapid pace of innovation, revealed a lack of design tool support for professional interaction designers as well as a knowledge gap regarding their approaches and needs. To address this gap, this thesis engages with the work of professional XR interaction designers in qualitative research on XR interaction design approaches. It applies two complementary lenses stemming from the scientific design and social practice theory discourses to observe, describe, analyze, and understand professional XR interaction designers' challenges and approaches, with a focus on application prototyping.
The art of nudging
(2023)
Do simple and subtle changes in the living and study environment improve the eating behaviour of students in an educational setting? This dissertation provides a not-so-simple answer to this simple question based on the outcomes of four studies that explore the effects and design of artwork nudges (specifically the artwork of Alberto Giacometti) on the eating behaviour of students by applying different research designs. Study 1 explores the effects of a Giacometti-like nudge (a more contemporary version of the original nudge) regarding the dietary behaviour of high school students in a controlled setting. Study 2 applies different artwork nudges within a virtual vignette setting to measure their effects on virtual meal choices made. Also, the degree to which individuals were aware of the nudge’s presence is included as an influential factor in nudge effectiveness. Study 3 assesses the susceptibility to nudges as measured with a questionnaire. Susceptibility to nudges is defined as nudgeability. Study 4 assesses the effects of the original Giacometti nudge in a real-world university cafeteria setting. Specifically, the immediate and sustained effects of the original Giacometti nudge on students’ meal purchases in the university cafeteria are considered. In addition, the role of awareness of the nudge’s presence as well as the acceptance of this specific nudge are discussed. The conclusion is drawn that the original Giacometti nudge should only be applied in an educational setting to improve healthy eating behaviour if the intended target groups and environment meet certain conditions. Artwork nudges in general should be applied only after rigorous testing of various types of different nudges and more research reflecting healthy eating in its entirety.
Lignin is an aromatic biopolymer found in the cell walls of plants. It is built up mainly from three so-called monolignols (p-hydroxyphenyl (H), guaiacol (G), and syringol (S)), which can be linked by various types of bonds, and it contains a large number of functional groups. Of particular interest for the use of lignin are its many phenolic hydroxyl groups, which can serve as a starting point for the synthesis of new products and are also responsible for its antioxidant properties. Since structure and properties depend on many factors such as the biomass and the pulping process, a detailed characterization of lignins is necessary to elucidate structure-property relationships and thus come a step closer to potential material applications. This work investigates the influence of the biomass, including the particle size used, and of the organosolv pulping process on the monomer composition, molecular weight, and antioxidant capacity of the isolated lignins.
The raw materials for lignin production are the three perennial, lignocellulose-rich low-input plants Miscanthus x giganteus, Silphium perfoliatum, and Paulownia tomentosa, which are currently used mainly for energy production. Under the European Union's bioeconomy strategy, however, future biorefineries are to focus on the holistic use of biomass, including material applications. In addition to these three plants, organosolv lignins are isolated from wheat straw and beech wood, biomasses already well described in the literature, and two softwood kraft lignins are used for comparison. The results show that the type of biomass mainly influences the monomer composition: grasses contain all three monolignols, hardwoods consist predominantly of S and G units, while softwoods are built up from G units only. The wood lignins also have higher molecular weights and better antioxidant properties than the grass and herbaceous lignins. Finer grinding of the biomass can influence the monomer composition: smaller particle sizes lead to lignins with a higher content of H units, for both Miscanthus and Paulownia. For Paulownia, the yield can also be increased and an increase in molecular weight observed when the smallest sieve fraction is used for the organosolv process. Autohydrolysis and the organosolv pulping process itself have a greater influence than the degree of grinding. With the biomass unchanged, the monomer composition hardly changes, but the bond types between the monolignols do. With higher process severity (time, temperature, ethanol concentration), ether bonds are cleaved, which increases the proportion of phenolic hydroxyl groups and thus the antioxidant capacity. Alongside this depolymerization, partial recondensation reactions are also observed.
The results obtained contribute to an understanding of how lignin source and isolation relate to the resulting lignin structure and antioxidant capacity, and thus provide a basis for the transition from energetic use to a sustainable material use of this renewable biopolymer. In particular, structure and antioxidant capacity can be tuned through the choice of pulping parameters, which should be a focus of future studies.
Microorganisms not only contribute to the spoilage of food but can also cause illness through consumption. Consumer concerns and doubts about the shelf life of products, and the resulting enormous amounts of food waste, have led to a demand for a rapid, robust, and non-destructive method for the detection of microorganisms, especially in the food sector. Therefore, a rapid and simple sampling method for the Raman and infrared (IR) microspectroscopic study of microorganisms associated with spoilage processes was developed. For the subsequent evaluation, pre-processing routines as well as chemometric models for the classification of spoilage microorganisms were developed. The microbiological samples are taken using a disinfectable sampling stamp and measured by microspectroscopy without the usual pre-treatments such as purification, separation, washing, and centrifugation. The resulting complex multivariate data sets were pre-processed, reduced by principal component analysis, and classified by discriminant analysis. Classification of independent unlabeled test data showed that microorganisms could be classified at genus, species, and strain level with an accuracy of 96.5 % (Raman) and 94.5 % (IR), respectively, despite large biological differences and the novel sampling strategy. As bacteria are exposed to constantly changing conditions, and their adaptation mechanisms may make them inaccessible to conventional measurement methods, the developed methods and models were assessed for their suitability for microorganisms exposed to stress. Compared to normal growth conditions, spectral changes in lipids, polysaccharides, nucleic acids, and proteins were observed in stressed microorganisms. Models were developed to discriminate microorganisms independently of the stress factors involved and of storage times. Classification of the investigated bacteria yielded accuracies of 97.6 % (Raman) and 96.6 % (IR), respectively, and a robust and meaningful model was developed to discriminate different microorganisms at genus, species, and strain level. The results are very promising and show that the methods and models developed for the discrimination of microorganisms, and for investigating the effect of stress factors on microorganisms by means of Raman and IR microspectroscopy, have the potential to be used, for example, in the food sector for the rapid determination of surface contamination.
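The classification pipeline described above (pre-processing, reduction by principal component analysis, discriminant analysis) can be illustrated with a minimal sketch. The following Python example uses scikit-learn on synthetic stand-in spectra; the thesis' actual pre-processing routines, data, and model tuning are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for baseline-corrected Raman/IR spectra:
# 300 spectra x 1000 wavenumber channels, 3 hypothetical genera.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# PCA compresses the highly collinear spectral channels;
# linear discriminant analysis then separates the classes.
model = make_pipeline(StandardScaler(), PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print(f"accuracy on held-out spectra: {model.score(X_test, y_test):.3f}")
```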
To this day, monitoring performance development in cycling involves specific performance diagnostics using prescribed test protocols. At the same time, owing to the greatly increased popularity of wearable devices, it is now very easy to record heart rate in everyday life and during exercise. What has been missing, however, is a suitable heart rate model that allows conclusions to be drawn about performance development. Using heart rate recordings in combination with a phenomenologically interpretable model to draw such conclusions as directly as possible, and without specific requirements on the training rides, offers the chance to substantially simplify insight into one's own performance development, both in professional cycling and in ambitious amateur practice. This thesis presents a novel, phenomenologically interpretable model for simulating and predicting heart rate in cycling and validates it in an empirical study. The model makes it possible to simulate heart rate (as well as other load-response parameters from respiratory gas analysis) with adequate accuracy and to predict it for a given power profile. Furthermore, a method for reducing the number of calibratable free model parameters is presented and validated in two empirical studies. After an individualized parameter reduction, the model can be used with only a single free parameter. This remaining free parameter can then be tracked over time and compared with the course of performance development. Two separate studies indicate that the free model parameter appears fundamentally capable of reflecting the course of performance development over time.
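The abstract does not give the model equations, so the following is only a generic illustration of what a phenomenologically interpretable heart-rate model can look like: a first-order response that relaxes towards a load-dependent steady state. All parameter names and values are hypothetical, not the thesis' model.

```python
import numpy as np

def simulate_hr(power_w, dt=1.0, hr_rest=60.0, drive=0.35, tau=40.0):
    """First-order phenomenological HR response to a power profile.

    hr_rest, drive and tau are illustrative free parameters; in a
    calibrated model, a single remaining parameter of this kind could
    be tracked over time as a proxy for performance development."""
    hr = np.empty(len(power_w))
    hr[0] = hr_rest
    for k in range(1, len(power_w)):
        hr_target = hr_rest + drive * power_w[k]            # steady state for this load
        hr[k] = hr[k - 1] + dt * (hr_target - hr[k - 1]) / tau  # relax towards it
    return hr

# 10 min at 150 W followed by 5 min of recovery, sampled at 1 Hz.
profile = np.concatenate([np.full(600, 150.0), np.zeros(300)])
print(simulate_hr(profile)[[0, 300, 599, 899]])
```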
In this thesis, a compressible semi-Lagrangian lattice Boltzmann method is newly developed and tested. The lattice Boltzmann method is a numerical flow simulation technique based on modeling particle distributions and their mutual interaction. In its original form, however, the method is limited to weakly compressible flows at low Mach numbers. The main drawbacks of previous attempts to extend it to supersonic flows are insufficient stability, impractically large velocity sets, or a restriction to small time step sizes. As an alternative to previous approaches, this thesis employs a semi-Lagrangian streaming step. Semi-Lagrangian methods use interpolation to decouple the spatial, temporal, and velocity discretization of the original lattice Boltzmann method. Following the introduction, the second and third chapters cover the fundamentals and principles of the lattice Boltzmann method and review previous approaches to simulating compressible flows. The compressible semi-Lagrangian lattice Boltzmann method is then developed and described. The extension essentially combines the method with suitable equilibrium functions and velocity sets. In the fourth chapter, new cubature-based velocity sets are developed and tested, including a D3Q45 velocity set for computing compressible flows that considerably reduces the computational effort compared with conventional velocity discretizations. In the fifth chapter, simulations of one-dimensional shock tubes, two-dimensional Riemann problems, and shock-vortex interactions are carried out for validation. Simulations of three-dimensional compressible Taylor-Green vortices and of wall-bounded test cases then demonstrate the advantages of the method for compressible flow simulations. To this end, the supersonic flow around a two-dimensional NACA 0012 airfoil and around a three-dimensional sphere as well as a supersonic channel flow are investigated. The simulation part is followed by an extensive discussion of the semi-Lagrangian lattice Boltzmann method in comparison with other methods, highlighting its advantages, such as comparatively large time step sizes, body-fitted meshes, and stability.
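As a rough illustration of the semi-Lagrangian streaming step described above, the sketch below advects a single distribution function on a periodic 1D grid by tracing the characteristic of each node back to its departure point and interpolating there. Linear interpolation and the toy setup are assumptions; the full compressible method (collision step, equilibrium functions, D3Q45 velocity sets) is not shown.

```python
import numpy as np

def semi_lagrangian_stream(f, xi, dt, x):
    """Semi-Lagrangian streaming of one distribution f_i.

    Instead of hopping to a neighbouring node (classic LBM, which ties
    dt to the lattice spacing), each node traces its characteristic
    back to the departure point x - xi*dt and interpolates f there,
    decoupling space, time and velocity discretization."""
    dx = x[1] - x[0]
    x_dep = x - xi * dt                                   # departure points
    return np.interp(x_dep, x, f, period=x[-1] - x[0] + dx)  # periodic wrap

x = np.linspace(0.0, 1.0, 100, endpoint=False)
f = np.exp(-200 * (x - 0.3) ** 2)                         # one distribution as demo
f_new = semi_lagrangian_stream(f, xi=1.2, dt=0.05, x=x)   # dt not lattice-bound
print(f_new.max())
```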
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
Risk-based authentication (RBA) is an adaptive approach to strengthening password authentication. It monitors a set of features relating to the login behavior during password entry. If the observed feature values differ significantly from those of previous logins, RBA requests additional proof of identity. Government agencies and a US presidential executive order recommend RBA to protect online accounts against attacks involving stolen passwords. Despite this, RBA has suffered from a lack of open knowledge: there has been little to no research on its usability, security, and privacy, even though understanding these aspects is important for broad acceptance.
This thesis aims to provide a comprehensive understanding of RBA through a series of studies. The results make it possible to build privacy-preserving RBA solutions that strengthen authentication while maintaining high user acceptance.
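Translated into a toy example, the core RBA loop looks roughly as follows: compare the feature values of a login attempt against the user's login history and escalate when the combined score is high. The scoring rule and feature names below are illustrative; deployed RBA systems use richer feature sets and calibrated models.

```python
from collections import Counter

def rba_risk(history, attempt, smoothing=1.0):
    """Toy risk score: product over features of 1 / (smoothed frequency
    with which this user has shown the observed value before)."""
    risk = 1.0
    for feature, value in attempt.items():
        counts = Counter(login[feature] for login in history)
        p = (counts[value] + smoothing) / (len(history) + smoothing * (len(counts) + 1))
        risk *= 1.0 / p
    return risk

history = [
    {"country": "DE", "browser": "Firefox"},
    {"country": "DE", "browser": "Firefox"},
    {"country": "DE", "browser": "Chrome"},
]
print(rba_risk(history, {"country": "DE", "browser": "Firefox"}))  # low: familiar
print(rba_risk(history, {"country": "RU", "browser": "Firefox"}))  # high: ask for a second factor
```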
Intelligent virtual agents provide a framework for simulating more life-like behavior and increasing plausibility in virtual training environments. They can improve the learning process if they portray believable behavior that can also be controlled to support the training objectives. In the context of this thesis, cognitive agents are considered a subset of intelligent virtual agents (IVA) with a focus on emulating cognitive processes to achieve believable behavior. The complexity of the employed algorithms, however, is often limited, since multiple agents need to be simulated in real time. Available solutions focus on a subset of the indicated aspects: plausibility, controllability, or real-time capability (scalability). Within this thesis project, an agent architecture for attentive cognitive agents is developed that considers all three aspects at once. The result is a lightweight cognitive agent architecture that is customizable to application-specific requirements. A generic trait-based personality model influences all cognitive processes, facilitating the generation of consistent and individual behavior. An additional mapping process provides a formalized mechanism to transfer results of psychological studies to the architecture. Personality profiles are combined with an emotion model to achieve situational behavior adaptation. Which action an agent selects in a situation also influences plausibility. An integral element of this selection process is an agent's knowledge about its world. Therefore, synthetic perception is modeled and integrated into the architecture to provide a credible knowledge base. The developed perception module includes a unified sensor interface, a memory hierarchy, and an attention process. With the presented realization of the architecture (CAARVE), it is possible for the first time to simulate cognitive agents whose behavior is simultaneously computable in real time and controllable. The architecture's applicability is demonstrated by integrating an agent-based traffic simulation built with CAARVE into a bicycle simulator for road-safety education. The developed ideas and their realization are evaluated within this work using different strategies and scenarios. For example, it is shown how CAARVE agents utilize personality profiles and emotions to plausibly resolve deadlocks in traffic simulations. Controllability and adaptability are demonstrated in additional scenarios. Using the realization, 200 agents can be simulated in real time (50 FPS), illustrating scalability. The achieved results verify that the developed architecture can generate plausible and controllable agent behavior in real time. The presented concepts and realizations provide a sound foundation for anyone interested in simulating IVAs in real-time environments.
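How a trait-based personality profile might parameterize cognitive processes can be sketched as a simple mapping from static traits plus a momentary emotion value onto tunable process parameters. The trait set, formulas, and parameter names below are illustrative placeholders, not the mapping actually used in CAARVE.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    # Trait axes in [0, 1]; the concrete trait set is a placeholder.
    openness: float
    agreeableness: float
    neuroticism: float

def behavior_parameters(p: Personality, frustration: float) -> dict:
    """Map a static profile plus a momentary emotion value onto the
    tunable parameters of the cognitive processes."""
    return {
        "gaze_scan_width": 0.5 + 0.5 * p.openness,                       # attention process
        "yield_probability": max(0.0, p.agreeableness - 0.6 * frustration),  # action selection
        "reaction_noise": 0.1 + 0.4 * p.neuroticism,
    }

print(behavior_parameters(Personality(0.7, 0.8, 0.2), frustration=0.5))
```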
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases - object and handle grasping, object stowing, pulling, and hand-over - and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
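The execution-model idea (qualitative success preconditions combined with a function from parameters to predicted execution success, plus diagnosis as a search for violated relations) can be sketched as follows. Class, relation, and parameter names are hypothetical, not the thesis' exact formalisation.

```python
from typing import Callable, Dict, List

class ExecutionModel:
    """Sketch of a hybrid execution model: qualitative preconditions
    plus a learned map from parameters to predicted success."""

    def __init__(self,
                 preconditions: Dict[str, Callable[[dict], bool]],
                 success_predictor: Callable[[dict], float]):
        self.preconditions = preconditions
        self.success_predictor = success_predictor

    def diagnose(self, params: dict) -> List[str]:
        # After a failure, the failure hypothesis is the set of
        # violated qualitative relations for the executed parameters.
        return [name for name, holds in self.preconditions.items()
                if not holds(params)]

model = ExecutionModel(
    preconditions={
        "gripper_above_object": lambda p: p["grasp_height"] > p["object_top"],
        "within_reach": lambda p: p["distance"] < 0.8,
    },
    success_predictor=lambda p: 0.9 if p["distance"] < 0.5 else 0.4,
)
failed = {"grasp_height": 0.02, "object_top": 0.05, "distance": 0.6}
print(model.diagnose(failed), model.success_predictor(failed))
```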
In this work, novel ionic agarose derivatives were first synthesized and then comprehensively characterized. Anionic agarose sulfates with regioselective derivatization at position G6 were obtained by homogeneous reaction in an ionic liquid. Cationic agarose carbamates with an adjustable degree of functionalization were accessible via a two-step synthesis: agarose phenyl carbonates were first prepared homogeneously and then converted by aminolysis into the desired functional agarose derivatives. The ionic agarose derivatives were fully water-soluble even at low degrees of functionalization. This made it possible to coat alginate microcapsules polyelectrolytically and to use them as carriers for controlled drug release. Composite gels of agarose, hydroxyapatite, and agarose derivatives were also prepared and characterized. In the second part, both the composite carrier materials and the alginate microcapsules were loaded with four model drugs (ATP, suramin, methylene blue, and A740003), and drug release was studied over a period of two weeks. For the ionic model drugs, composite carriers containing an ionic agarose derivative, the coated microcapsules, and the combination of composite and capsules proved effective in slowing release to as low as 40%. For the poorly water-soluble substance A740003, a receptor ligand for the osteogenic differentiation of stem cells, a strongly delayed release from polyelectrolyte microcapsules was observed. Using fitting models known from the literature as well as newly developed ones, diffusion was identified as the main release mechanism, the release curves were described mathematically with high accuracy, and conclusions were drawn about the individual phases of release.
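As an illustration of fitting release curves with a model known from the literature, the sketch below fits the Korsmeyer-Peppas equation M_t/M_inf = k t^n to an invented release series that, like the carriers described above, levels off near 40 % after two weeks. The data and the choice of this particular literature model are assumptions; exponents n near 0.5 are commonly read as diffusion-controlled release.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """M_t/M_inf = k * t**n, a classic empirical release model."""
    return k * np.power(t, n)

t = np.array([1, 2, 4, 8, 24, 48, 96, 168, 336], dtype=float)   # hours (illustrative)
release = np.array([0.04, 0.06, 0.08, 0.11, 0.18, 0.24, 0.31, 0.37, 0.40])

(k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.05, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")   # n near 0.5 -> diffusion as the main mechanism
```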
Western consumption patterns are strongly associated with environmental pollution and climate change, which challenges us to transform our society and consumption towards a sustainable future. This thesis takes up this challenge and aims to contribute to this debate at the intersection of ICT artifacts and social practices, using the examples of food and mobility consumption. The social practice lens is employed as an alternative to the predominant persuasive or motivational design lens in these consumption domains. Against this background, the thesis first presents three research papers that contribute to a broader understanding of dynamic practices and their transformation towards a sustainable stable state. The subsequent research builds on the empirical results of these papers and focuses more intensely on the appropriation of materials and infrastructures by means of Recommender Systems. With this approach, the thesis contributes to three fields: practice-based Computing, Recommender Systems, and Consumer Informatics.
The human enzymes GLYAT (glycine N-acyltransferase), GLYATL1 (glutamine N-phenylacetyltransferase) and GLYATL2 (glycine N-acyltransferase-like protein 2) are not only important in the detoxification of xenobiotics via the human liver but are also involved in the elimination of acyl residues that accumulate in the form of their coenzyme A (CoA) esters in some rare inborn errors of metabolism. This concerns, for example, disorders in the degradation of branched-chain amino acids, such as isovaleric acidemia or propionic acidemia. In addition, the enzymes assist in the elimination of ammonium, which is produced during the transamination of amino acids and accumulates in urea cycle defects. Sequence variants of the enzymes were also investigated, as they may provide evidence of impaired enzyme activities from which therapy adjustments can potentially be derived. A modified Escherichia coli strain that may allow for solubility and proper folding was chosen for the overexpression and partial biochemical characterization of the enzymes. Since post-translational protein modifications are very limited in bacteria, we also attempted to overexpress the enzymes in HEK293 cells (of human origin). In addition to characterization via immunoblots and activity assays, the intracellular localization of the enzymes was determined using GFP coupling and confocal laser scanning microscopy in transfected HEK293 cells. The GLYATL2 enzyme may have tasks beyond detoxification and metabolic defects; the preliminary molecular biology work was performed as part of this project, while the enzyme activity determinations were outsourced to a co-supervised bachelor thesis. The enzyme activity determinations with purified recombinant human enzyme from Escherichia coli revealed a threefold higher activity of the GLYAT sequence variant p.(Asn156Ser), which should be considered the probably authentic wild type of the enzyme. In addition, a reduced activity was shown for the GLYAT variant p.(Gln61Leu), which is very common in South Africa and could therefore be of particular importance in the treatment of isovaleric acidemia, which is also common there. Intracellularly, GLYAT and GLYATL1 were localized to the mitochondria. As the analyses have shown, sequence variations of GLYAT and GLYATL1 influence their enzyme activity. In the case of reduced GLYAT activity, patients could increasingly be treated with L-carnitine in the sense of an individualized therapy, since the conjugation of toxic isovaleryl-CoA with glycine is restricted by the GLYAT sequence variation. Activity-reducing variants identified in this project are of particular interest, as they may influence the treatment of certain metabolic defects.
The processing of employee personal data is dramatically increasing. To protect employees' fundamental right to privacy, the law provides for the implementation of privacy controls, including transparency and intervention. At present, however, the stakeholders responsible for putting these obligations into action, such as employers and software engineers, simply lack the fundamental knowledge needed to design and implement the necessary controls. Indeed, privacy research has so far focused mainly on consumer relations in the private context; privacy in the employment context is less well studied. However, since privacy is highly context-dependent, existing knowledge and privacy controls from other contexts cannot simply be adapted to the employment context. In particular, privacy in employment is subject to different legal and social norms, which require a different conceptualization of the right to privacy than is usual in other contexts. To adequately address these aspects, there is broad consensus that privacy must be regarded as a socio-technical concept in which human factors must be considered alongside technical-legal factors. Today, however, there is a particular lack of knowledge about human factors in employee privacy, even though disregarding the needs and concerns of individuals, or a lack of usability, is a common reason for the failure of privacy and security measures in practice. This dissertation addresses key knowledge gaps on human factors in employee privacy by presenting the results of three in-depth studies with employees in Germany. The results provide insights into employees' perceptions of the right to privacy, as well as their perceptions and expectations regarding the processing of employee personal data. The insights gained provide a foundation for the human-centered design and implementation of employee-centric privacy controls, i.e., privacy controls that incorporate the views, expectations, and capabilities of employees. Specifically, this dissertation presents the first mental models of employees on the right to informational self-determination, the German equivalent of the right to privacy. The results provide insights into employees' (1) perceptions of categories of data, (2) familiarity and expectations of the right to privacy, and (3) perceptions of data processing, data flow, safeguards, and threat models. In addition, three major types of mental models are presented, each with a different conceptualization of the right to privacy and a different desire for control. Moreover, this dissertation provides multiple insights into employees' perceptions of data sensitivity and willingness to disclose personal data in employment. Specifically, it highlights the uniqueness of the employment context compared to other contexts and breaks down the multi-dimensionality of employees' perceptions of personal data. As a result, the dimensions in which employees perceive data are presented, and differences among employees are highlighted. This is complemented by identifying personal characteristics and attitudes toward employers, as well as toward the right to privacy, that influence these perceptions. Furthermore, this dissertation provides insights into practical aspects for the implementation of personal data management solutions to safeguard employee privacy. Specifically, it presents the results of a user-centered design study with employees who process personal data of other employees as part of their job.
Based on the results obtained, a privacy pattern is presented that harmonizes privacy obligations with personal data processing activities. The pattern is useful for designing privacy controls that help these employees handle employee personal data in a privacy-compliant manner, taking into account their skills and knowledge, thus helping to protect employee privacy. The outcome of this dissertation benefits a wide range of stakeholders who are involved in the protection of employee privacy. For example, it highlights the challenges to be considered by employers and software engineers when conceptualizing and designing employee-centric privacy controls. Policymakers and researchers gain a better understanding of employees' perceptions of privacy and obtain fundamental knowledge for future research into theoretical and abstract concepts or practical issues of employee privacy. Employers, IT engineers, and researchers gain insights into ways to empower data processing employees to handle employee personal data in a privacy-compliant manner, enabling employers to improve and promote compliance. Since the basic principles underlying informational self-determination have been incorporated into European privacy legislation, we are confident that our results are also of relevance to stakeholders outside Germany.
Remineralizing soils? The agricultural usage of silicate rock powders in the context of One Health
(2022)
The concept of soil health describes the capacity of soil to fulfill essential functions and ecosystem services. Healthy soils are inextricably linked to sustainable agriculture and are crucial for the interconnected health of plants, animals, humans, and their environment ("One Health"). However, soil health is threatened by unprecedented rates of soil degradation. A major form of soil degradation is nutrient depletion, which has been seriously underestimated for potassium (K) and several micronutrients. One way to replenish K and micronutrients is the use of multi-nutrient silicate rock powders (SRPs). Their agronomic suitability has long been questioned due to slow weathering rates, although recent studies found significant soil health improvements and challenge past objections that insufficiently addressed the factorial complexity of the weathering process. Furthermore, environmental co-benefits might arise through their mixture with livestock slurry, which could reduce the slurry's ammonia (NH3) emissions and improve its biophysicochemical properties. However, neither SRPs' effects on soil health nor the biophysicochemical effects of mixing SRPs with livestock slurry have hitherto been comprehensively analyzed. The overall aim of this dissertation is thus to review the agricultural usage of SRPs in the context of One Health. The first part of this thesis starts with an elaboration of the health concept in general and then explores the interlinkages between soil health and One Health. Subsequently, the potentials and oftentimes bypassed problems of operationalizing soil health are outlined, and feasible ways for its future usage are proposed. The second part of the thesis reviews how and under which circumstances SRPs can ameliorate soil health. This is done by presenting a new framework with the most relevant factors for the usage of SRPs, through which several contradictory outcomes of prior studies can be explained. A subsequent analysis of 48 crop trials reveals the potential of SRPs as a K and multi-nutrient soil amendment for tropical soils, whereas the benefits for temperate soils are inconclusive. The review revealed various co-benefits that could substantially increase SRPs' overall agronomic efficiency. The last part of the thesis reports on the effects of mixing two rock powders with cattle slurry. SRPs significantly increased the slurry's CH4 emission rates, whereas the effects on NH3, CO2, and N2O emission rates were mostly insignificant. The rock powders increased the nutrient content of the slurry and altered its microbiology. In conclusion, the concept of soil health must be operationalized in more specific, practical, and context-dependent ways. Particularly in humid tropical environments, SRPs could advance low-cost soil health ameliorations, and their usage could have additional co-benefits regarding One Health. Mixing SRPs with organic materials like livestock slurry could overcome the major obstacle of their low solubility, although the effects on NH3 and greenhouse gas emissions must be further evaluated.
Typically, plastic packaging materials are produced using additives such as stabilisers to introduce specific desired properties into the material or, in the case of stabilisers, to prolong the shelf life of the packaging. However, these stabilisers are typically fossil-based and can pose risks to both environmental and human health. The present study therefore presents more sustainable alternatives based on regional renewable resources which show the antioxidant, antimicrobial, and UV-absorbing properties relevant to successfully serve as plastic stabilisers. In the study, all plants are extracted and characterised with regard not only to antioxidant, antimicrobial, and UV-absorbing effects but also to additional relevant properties such as chemical constituents, molar mass distribution, and absorbance in the visible range. The extraction process is furthermore optimised and, where applicable, reasonable opportunities for waste valorisation are explored and analysed. In addition, interactions between the analysed plant extracts are described, and model films based on polylactic acid incorporating the extracts are prepared. Based on these model films, formulation tests and migration analyses according to EU legislation are conducted.
The well-known aromatic and medicinal plant thyme (Thymus vulgaris L.) contains phenolic terpenoids such as thymol and carvacrol, which have strong antioxidant, antimicrobial, and UV-absorbing effects. Analyses show that these effects can be used in both lipophilic and hydrophilic surroundings, that the variant Varico 3 is a more potent cultivar than the other analysed thyme variants, and that a passive extraction setup can be used for extract preparation, while distillation of the essential oils can be a more efficient approach.
Macromolecular antioxidant polyphenols, particularly proanthocyanidins, have been found in the seed coats of the European horse chestnut (Aesculus hippocastanum L.), which are regularly discarded in the phytopharmaceutical industry. In this study, such effects and compounds are reported for the first time, and a valorisation of these waste materials has been analysed successfully. Furthermore, a passive extraction setup for waste materials and whole seeds has been developed. In extracts of snowdrops, specifically Galanthus elwesii HOOK.F., high concentrations of tocopherol have been found, which promote a particularly high antioxidant capacity in lipophilic surroundings. Different coniferous woods (Abies div., Picea div.) in use as Christmas trees are extracted after separating the biomass into leaf and wood parts, and analysed with regard to extraction optimisation and the drought resistance of the active substances. Antioxidant and UV-absorbing proanthocyanidins are found even in dried biomass, allowing the circular use of spent Christmas trees as bio-based stabilisers and the production of sustainable paper as a byproduct.
Telogen single hairs are a frequently encountered type of trace evidence at crime scenes. At present they are mostly excluded from STR typing because, owing to low DNA amounts and strong DNA degradation, their STR profiles are in many cases incomplete and difficult to interpret. In this thesis, a systematic approach was applied to reveal correlations between DNA quantity and DNA degradation on the one hand and STR typing success on the other, and, based on these, to predict the typing success of DNA from hairs.
For this purpose, a human-specific (RiboD) and a canine-specific (RiboDog) qPCR-based assay was developed to measure DNA quantity and assess DNA integrity by means of a degradation value (D-value). Because the primers used target ubiquitous ribosomal DNA sequences, the underlying principle can be transferred quickly and inexpensively to different species. The assays were verified using serially degraded DNA, and the human assay was validated against the commercial Quantifiler™ Trio DNA Quantification Kit. Finally, the assays were used on DNA from telogen and catagen single hairs of humans and dogs to examine the relationship between DNA quantity and DNA integrity and the completeness of the STR alleles (allele recovery) of DNA profiles obtained with capillary electrophoresis (CE) STR kits. For human single hairs, allele recovery depended on both DNA quantity and DNA integrity. In contrast, DNA degradation was consistently lower in single dog hairs, and allele recovery depended solely on the amount of DNA extracted.
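A degradation value of this kind is commonly computed as the ratio of the DNA concentration measured with a short amplicon to that measured with a long amplicon; the sketch below uses this common definition, which may differ in detail from the RiboD formula.

```python
def degradation_value(conc_short_ng_ul, conc_long_ng_ul):
    """D-value as commonly defined in forensic qPCR: short-amplicon
    concentration over long-amplicon concentration. Intact DNA gives
    D close to 1; degraded DNA inflates D because long targets drop
    out first."""
    return conc_short_ng_ul / conc_long_ng_ul

print(degradation_value(0.120, 0.110))  # ~1.1: largely intact
print(degradation_value(0.120, 0.008))  # 15:   strongly degraded
```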
To further improve STR analysis of degraded human DNA samples, a novel NGS-based assay (maSTR, mini-amplicon STR) was established that amplifies the 16 forensic STR loci of the European Standard Set plus amelogenin in parallel as very short amplicons (76-296 bp). With intact DNA, the maSTR assay generated reproducible, complete profiles without allelic drop-ins at around 200 pg of input DNA. At lower DNA amounts, occasional allelic drop-ins occurred, but complete profiles were still obtained with at least 43 pg of DNA.
The combined strategy of RiboD measurements of DNA quantity and integrity and the resulting STR typing success of the maSTR assay was validated on degraded DNA. The strategy was then applied to DNA from telogen and catagen single hairs and compared with the results of the CE-based PowerPlex® ESX 17 kit, which analyzes the same STR marker set. The typing success of both STR assays depended on the optimal amount of template DNA as well as on DNA integrity. With the maSTR assay, complete profiles were obtained with approximately 50 pg of input DNA for slightly degraded DNA from single hairs, and with approximately 500 pg of strongly degraded DNA. Owing to the low DNA amounts from telogen single hairs, the reproducibility of the maSTR results varied, but the assay was consistently superior to the PowerPlex® ESX 17 kit in terms of allele recovery.
A comparison with two CE-based STR kits that are complementary in terms of amplicon length distribution (PowerPlex® ESX 17 and ESI 17 Fast), as well as with a commercial NGS kit (ForenSeq™ DNA Signature Prep), showed that it is not the NGS technology itself but the shortness of the amplicons that is the most important factor for typing degraded DNA. In all comparisons with the commercial kits, however, the maSTR assay showed a higher number of allelic drop-ins, which became more frequent the lower the DNA amount used and the more strongly it was degraded.
Because profiles containing allelic drop-ins correspond to mixed profiles, the STR profiles generated with the maSTR assay were analyzed with methods for interpreting mixed traces. In composite interpretation, all alleles occurring across replicates are counted; in consensus interpretation, only the reproducible alleles. Composite interpretation proved best suited for profiles with few allelic drop-ins (PowerPlex® ESX 17-generated profiles), and consensus interpretation for drop-in-prone profiles (maSTR-generated profiles).
Finally, the GenoProof Mixture 3 software was used to examine how well semi-continuous and fully continuous probabilistic methods are suited to the biostatistical evaluation of DNA profiles from single hairs. Owing to its high number of allelic drop-ins, the maSTR assay proved only slightly superior to the CE-based methods, and only for DNA present in sufficient quantity and only slightly degraded; in that range, however, the hair profile can also be matched to the reference profile with CE-based methods.
From all results, a recommendation for handling DNA from shed single hairs was derived, based on the degree of DNA degradation in combination with the DNA quantity. This thesis thus lays a foundation for making shed single hairs usable in routine forensic casework, and potentially for applying the approach to other trace types with low amounts of degraded DNA. This could increase the usefulness of such traces for forensic investigation, especially when the standard CE-based methods fail. In the longer term, thanks to the high multiplexing capacity of uniform, short markers, NGS is generally superior to CE-based technology for typing degraded DNA.
Collaboration among multiple users on large screens leads to complicated behavior patterns and group dynamics. To gain a deeper understanding of collaboration on vertical, large, high-resolution screens, this dissertation builds on previous research and gains novel insights through new observational studies. Among other things, the collected results reveal new patterns of collaborative coupling, suggest that territorial behavior is less critical than shown in previous research, and demonstrate that workspace awareness can also negatively affect the effectiveness of individual users.
In this thesis it is posed that the central object of preference discovery is a co-creative process in which the Other can be represented by a machine. It explores efficient methods to enhance introverted intuition using extraverted intuition's communication lines. Possible implementations of such processes are presented using novel algorithms that perform divergent search to feed the users' intuition with many examples of high quality solutions, allowing them to take influence interactively. The machine feeds and reflects upon human intuition, combining both what is possible and preferred. The machine model and the divergent optimization algorithms are the motor behind this co-creative process, in which machine and users co-create and interactively choose branches of an ad hoc hierarchical decomposition of the solution space.
The proposed co-creative process consists of several elements: a formal model for interactive co-creative processes, evolutionary divergent search, diversity and similarity, data-driven methods to discover diversity, limitations of artificial creative agents, matters of efficiency in behavioral and morphological modeling, visualization, a connection to prototype theory, and methods to allow users to influence artificial creative agents. This thesis helps put the human back into the design loop in generative AI and optimization.
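A representative divergent-search algorithm from the quality-diversity family, which produces exactly the kind of archive of many diverse, high-quality examples described above, is MAP-Elites. The minimal sketch below is generic and not claimed to be the algorithm developed in the thesis.

```python
import random

def map_elites(fitness, descriptor, bins=10, iters=5000):
    """Minimal MAP-Elites: keep the best solution per niche of a
    descriptor space, so the archive feeds the user's intuition with
    many diverse, high-quality examples instead of a single optimum."""
    archive = {}  # niche index -> (fitness, solution)
    for _ in range(iters):
        if archive and random.random() < 0.9:
            parent = random.choice(list(archive.values()))[1]
            x = [g + random.gauss(0, 0.1) for g in parent]      # mutate an elite
        else:
            x = [random.uniform(-1, 1) for _ in range(2)]       # random restart
        niche = tuple(min(bins - 1, max(0, int((g + 1) / 2 * bins)))
                      for g in descriptor(x))
        f = fitness(x)
        if niche not in archive or f > archive[niche][0]:
            archive[niche] = (f, x)                             # elitism per niche
    return archive

# Toy problem: fitness favours the origin, descriptors are the raw genes.
archive = map_elites(fitness=lambda x: -sum(g * g for g in x), descriptor=lambda x: x)
print(len(archive), "niches filled")
```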
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VEs and REs). All feedback modalities were designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen, and that perceptual aspects of virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied to create and evaluate the interfaces; in total, seven experiments were performed. All five prototypes use a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VEs and REs target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface, and a psychophysical study highlights its potential. The back of a smartphone is used for haptic feedback in the third prototype; besides a psychophysical study, it is shown that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. In the first prototype, the sides of the feet are stimulated to convey proximity information from remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated: a foot platform providing several feedback modalities shows that self-motion perception can be increased.
At the end of 2019, about 4.1 billion people on earth were using the internet. Because people entrust their most intimate and private data to their devices, European legislation has declared the protection of natural persons in relation to the processing of personal data a fundamental right. In 2018, 23 million people worldwide were developing software and thus carried the responsibility of implementing data security and privacy. However, the implementation of data and application security is a challenge, as evidenced by over 41 thousand documented security incidents in 2019. Probably the most basic, powerful, and frequently used tools software developers work with are Application Programming Interfaces (APIs). Security APIs are essential tools for bringing data and application security into software products. However, research has revealed that usability problems of security APIs lead to insecure API use during development. Basic security requirements such as securely stored passwords, encrypted files, or secure network connections can become an error-prone challenge and, in consequence, lead to unreliable or missing security and privacy. Because software developers hold a key position in software development processes, security tools that do not work well for them pose a risk to all people using software. Yet little is known about developers' requirements, knowledge that is needed to address the problem and improve the usability of security APIs. This thesis is one of the first to examine the usability of security APIs. To this end, the author examines to what extent information flows can support software developers in using security APIs to implement secure software, by conducting empirical studies with software developers. This thesis contributes fundamental results that can be used in future work to identify and improve important information flows in software development. The studies clearly showed that developer-tailored information flows with adapted security-relevant content have a positive influence on the correct implementation of security. However, the results also led to the conclusion that API producers need to pay special attention to the channels through which they direct information flows to API users and to how the information is designed to be useful for them. In many cases, it is not enough to provide security-relevant information via the documentation alone. Here, proactive methods like the API security advice proposed by this thesis achieve significantly better results in terms of findability and actionable support. To further increase the effectiveness of the API security advice, this thesis developed a cryptographic API warning design for the terminal by adopting a participatory design approach with experienced software developers. However, it also became clear that a single information flow can only support developers up to a certain extent. As observed in two studies conducted in complex API environments in web development, multiple complementary information flows have to meet the extensive information needs of developers so that they are able to develop secure software. Several newly evaluated approaches provided promising insights towards more API-consumer-focused documentation designs as a complement to API warnings.
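The "securely stored passwords" example can make the usability argument concrete: even with only the Python standard library, a correct implementation forces the developer to make several security-critical decisions that the API does not make for them. The snippet is illustrative and not taken from the thesis.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Standard-library password hashing. Even this small task exposes
    the usability burden: the API leaves salt generation, iteration
    count, and comparison semantics entirely to the caller."""
    salt = os.urandom(16)                       # caller must generate and keep the salt...
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                         # ...and must choose the iteration count

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))
```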
Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head Mounted Displays and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user’s gaze, measured with an eye tracker integrated into the HMD. In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing global illumination of a scene interactively by allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
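To make the gaze-contingent sampling idea concrete, the following is a minimal sketch of a foveated sample-density function: full density near the tracked gaze point and a smooth falloff outside it. The falloff shape, the foveal radius, and all names are illustrative assumptions, not the pipeline actually implemented in the thesis.

```python
import numpy as np

def foveated_sample_density(px, py, gaze, fovea_radius=80.0, min_density=0.05):
    """Illustrative foveated falloff: density 1.0 inside a foveal disk
    around the gaze point, inverse-quadratic decay outside, clamped to a
    minimum so the periphery is never completely unsampled."""
    dist = np.hypot(px - gaze[0], py - gaze[1])
    # Clamping dist to the foveal radius makes the ratio exactly 1 inside
    # the fovea and avoids division-by-zero at the gaze point.
    safe_dist = np.maximum(dist, fovea_radius)
    return np.maximum(min_density, (fovea_radius / safe_dist) ** 2)

# Usage: treat the density as the probability of shooting a primary ray
# through each pixel in the current frame.
xs, ys = np.meshgrid(np.arange(1920), np.arange(1080))
density = foveated_sample_density(xs, ys, gaze=(960, 540))
shoot_ray = np.random.rand(*density.shape) < density
print(f"rays this frame: {shoot_ray.sum()} of {shoot_ray.size}")
```

The reprojection and temporal accumulation steps described above would then fill in the pixels that received no ray in a given frame.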
Since its advent, the sustainability effects of the modern sharing economy have been the subject of controversial debate. While its potential was initially discussed in terms of a post-ownership development, with a view to decentralizing value creation, increasing social capital, and relieving the environment through better utilization of material goods, critics have become increasingly loud in recent years. Many people hoped that carsharing could lead to a development away from ownership towards flexible use and thus more resource-efficient mobility. However, carsharing remains a niche, and while many people like the idea in general, they appear to consider it disadvantageous as a means of transport in terms of cost, flexibility, and comfort. A key innovation that could lift carsharing out of its niche existence in the future is autonomous driving. This technology could give shared mobility a new boost by allowing it to overcome the weaknesses of the present carsharing business model. Flexibility and comfort could be greatly enhanced with shared autonomous vehicles (SAVs), which could simultaneously offer benefits in terms of low cost and better use of time, without the burden of vehicle ownership. However, it is not the technology itself that is sustainable; rather, sustainability depends on the way in which the technology is used. Hence, it is necessary to prospectively assess the direct and indirect (un)sustainable effects before or during the development of a technology, in order to incorporate these findings into the design and decision-making process. Transport research has been intensively analyzing the possible economic, social, and ecological consequences of autonomous driving for several years. However, research lacks knowledge about the consequences to be expected from shared autonomous vehicles. Moreover, previous findings are mostly based on the knowledge of experts, while potential users are rarely included in the research. To address this gap, this thesis contributes to answering the question of what the ecological and social impacts of the expected concept of SAVs will be. In my thesis, I study in particular the ecological consequences of SAVs in terms of the modal shifts they can induce, as well as their social consequences in terms of potential job losses in the taxi industry. To this end, I apply a user-oriented, mixed-method technology assessment approach that complements existing, expert-oriented technology assessment studies on autonomous driving, which have so far been dominated by scenario analyses and simulations. To answer the two questions, I triangulated scenario analysis with qualitative and quantitative user studies. The empirical studies provide evidence that the automation of mobility services such as carsharing may, to a small extent, foster a shift from the private vehicle towards mobility on demand. However, the findings also indicate that rebound effects are to be expected: significantly more users are expected to move away from the more sustainable public transportation, so that the negative modal shift effects overcompensate the positive ones. The results also show that a large proportion of taxi trips could be replaced by SAVs, making the profession of taxi driver somewhat obsolete.
However, interviews with taxi drivers revealed that the services provided by drivers go beyond mere transport, so that even in the age of SAVs the need for human assistance will continue, though to a smaller extent. Given these findings, I see potential for action at different levels: users, mobility service providers, and policymakers. Regarding the environmental and social impacts resulting from the use of SAVs, there is a strong conflict of objectives among users, potential SAV operators, and sustainable environmental and social policies. In order to strengthen the positive effects and counteract negative effects such as unintended modal shifts, policies may soon have to regulate the design of SAVs and their introduction. A key starting point for transport policy is to promote the use of more environmentally friendly means of transport, in particular by making public transportation attractive and, if necessary, by making the use of individual motorized mobility less attractive. The taxi industry must face the challenges of automation by opening up to these developments and focusing on service orientation, in order to strengthen the drivers' main unique selling point compared to automated technology. Assessing the impacts of technologies that do not yet exist generally involves great uncertainty. With the results of my work, however, I argue that a user-oriented technology assessment can usefully complement the findings of classic methods of technology assessment and can iteratively inform the development process regarding technology and regulation.
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly utilized, e.g., to simplify housework or to optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic need to be highly scalable and are therefore denoted Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches to designing modern ULS software systems: in his doctoral thesis, he introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is, besides high scalability, another crucial quality factor.
Due to the use of fossil fuel resources, many environmental problems have been growing. Recent research therefore focuses on environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers, and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food-packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that two-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity was discussed with regard to the biomass source, pulping process, and degree of purification. Lignins obtained from industrial black liquor were compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). DPPH inhibition is also affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to assess the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both scavenging activity and antimicrobial activity depend on the biomass source: Organosolv of softwood > Kraft of softwood > Organosolv of grass. Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin leaching in the produced films affected the activity positively, and the chitosan addition enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food-spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food-spoilage bacteria.
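For reference, radical-scavenging activity in a DPPH assay is conventionally reported as the percentage inhibition of the DPPH absorbance, typically read at about 517 nm. This is the standard assay readout, not a formula quoted from the thesis:

```latex
\text{DPPH inhibition}\;(\%) \;=\;
  \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}}
  \times 100
```

Here $A_{\mathrm{control}}$ is the absorbance of the DPPH solution without lignin and $A_{\mathrm{sample}}$ the absorbance after reaction with the extract.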
For sustainable development, the electricity sector needs to be decarbonized. In 2017, only 54% of West African households had access to the electrical grid. Renewable sources should therefore play a major role in the development of the power sector in West Africa. Above all, solar power shows the highest potential among renewable energy sources. However, it is highly variable, depending on atmospheric conditions. This study addresses the challenges for a solar-based power system in West Africa by analyzing the atmospheric variability of solar power. For this purpose, two aspects are investigated. In the first part, the daily power reduction due to atmospheric aerosols is quantified for different solar power technologies. Meteorological data from six ground-based stations are used to model photovoltaic and parabolic-trough power during all mostly clear-sky days in 2006, combining a radiative transfer model with a solar power model. The results show that the reduction due to aerosols can reach up to 79% for photovoltaic and up to 100% for parabolic-trough power plants during a major dust outbreak. The frequent dust outbreaks occurring in West Africa would cause frequent blackouts if sufficient storage capacities are not available. On average, aerosols reduce the daily power yields by 13% to 22% for photovoltaics and by 22% to 37% for parabolic troughs. For the second part, the long-term atmospheric variability and trends of solar irradiance are analyzed and their impact on photovoltaic yields is examined for West Africa. Based on a 35-year satellite data record (1983 - 2017), the temporal and spatial variability and the general trend are depicted for global and direct horizontal irradiances. Furthermore, photovoltaic yields are calculated on a daily basis. They show a strong meridional gradient, with highest values of 5 kWh/kWp in the Sahara and Sahel zone and lowest values (around 4 kWh/kWp) in southern West Africa. The temporal variability is highest in southern West Africa (up to around 18%) and lowest in the Sahara (around 4.5%). This implies the need for North-South grid development in order to feed the increasing demand on the densely populated coast with solar power from the northern parts of West Africa. Additionally, global irradiances show a long-term positive trend (up to +5 W/m²/decade) in the Sahara and a negative trend (up to -5 W/m²/decade) in southern West Africa. If this trend continues, the spatial differences in solar power potential will increase in the future. This thesis provides a better understanding of the impact of atmospheric variability on solar power in a challenging environment like West Africa, which is characterized by the strong influence of the African monsoon. The importance of aerosols is pointed out, and long-term changes of irradiance are characterized with regard to their implications for photovoltaic power.
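To relate the reported kWh/kWp figures to irradiance, here is a minimal sketch of the daily specific-yield calculation; the performance ratio and the irradiance profile are assumed placeholder values, not outputs of the radiative transfer and power models used in the study.

```python
import numpy as np

def daily_specific_yield(ghi_wm2, pr=0.8, dt_h=1.0):
    """Daily PV yield in kWh per kWp from hourly global irradiance,
    using the common specific-yield approximation
        E = PR * sum(GHI) * dt / G_STC,   G_STC = 1000 W/m^2,
    with an assumed performance ratio PR. This illustrates the kWh/kWp
    figure of merit, not the thesis' full model chain."""
    g_stc = 1000.0  # standard test condition irradiance in W/m^2
    return pr * np.sum(ghi_wm2) * dt_h / g_stc

# Example: aerosol-induced reduction between a clear day and a dusty day.
clear = 900 * np.sin(np.linspace(0, np.pi, 13)) ** 1.5  # hourly GHI, W/m^2
dusty = 0.35 * clear                                    # assumed dust attenuation
loss = 1 - daily_specific_yield(dusty) / daily_specific_yield(clear)
print(f"daily yield loss due to aerosols: {loss:.0%}")
```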
Owing to increasing raw material scarcity, the search for alternative, sustainable raw materials is moving ever more into the foreground. With regard to efficient chemical utilization, lignin offers numerous advantages for various fields of application, for example for bio-based polyurethane coatings, e.g. for corrosion protection. Key problems in the use of lignin arise from the heterogeneity of this natural product and from its low polymerization compatibility with polyolefins; both factors affect, among other things, the mechanical properties of the corresponding lignin-based polymers. Moreover, the specific structure, and thus the physical and chemical properties, of lignin depend strongly on the raw material source and on the extraction process.
The aim of this work was the structural elucidation of unmodified and modified Kraft lignins (KL) and the investigation of the reactivity of aromatic and aliphatic hydroxyl groups as a function of pH. To this end, unmodified KL were extracted from black liquor and subsequently subjected to Soxhlet extraction in order to obtain lignin fractions soluble in methyltetrahydrofuran, predominantly aromatic in character, and thus to ensure improved solubility in THF, the solvent used in the subsequent polyurethane synthesis. In addition, the extracted KL were chemically modified by demethylation of methoxy groups. The number of hydroxyl groups required for polymerization was quantified by wet-chemical methods and by differential UV/VIS spectroscopy. Subsequently, lignin-based, functionalized polyurethane coatings were synthesized with particular attention to ecological and economic sustainability aspects. Surface functionalization improved surface homogeneity and, via blend formation, allowed triphenylmethane (TPM) dyes to be embedded into the coatings. Regarding the influence of the extraction pH (pH = 2 - 5) on the behavior of the KL obtained, changes in both the structure of the lignins and their thermal stability were observed. It was also shown that the functionality/reactivity of the aromatic and aliphatic hydroxyl groups in lignin increases with increasing pH. Homogeneous lignin-based polyurethane coatings (LPU coatings) were successfully synthesized from unmodified KL; when KL extracted at higher pH values were used, these LPU coatings exhibited more homogeneous, hydrophobic surfaces and good thermal stability. Additional modification of the KL by demethylation led, owing to the increased number of free hydroxyl groups, to a moderate increase in reactivity and hence to a further improvement of the surface properties with respect to homogeneous surface structure and brilliance. With regard to sustainability, synthesis optimization, consisting of adjusting the raw material particle size, ultrasonic treatment, and use of the commercial trifunctional polyether polyol Lupranol® 3300 in combination with Desmodur® L75, increased the solubility of lignin in the polyol as well as the thermal stability of the LPU coatings. In the course of these optimizations, energy savings were achieved through shortened drying times, and the amounts of commercially available chemicals used were reduced; both savings led to cost reductions. At the same time, not only could the KL content in the polymer coating be increased: an optimized, economical one-step synthesis also simplified the implementation of this approach in industrial applications. Embedding selected TPM dyes (crystal violet and brilliant green) into the LPU coatings by blend formation demonstrably produced an antimicrobial effect of the surface coating without loss of surface homogeneity. The LPU coatings synthesized in this work could in future be used as anti-corrosion and antimicrobial coatings, e.g. in agriculture and in the construction sector.
The findings obtained in this work contribute to the structural elucidation of the complex biopolymer lignin. Moreover, the investigations and results provide a basis for the sustainable production of lignin-based polymer coatings, which will become increasingly important in the future.
Optimization plays an essential role in industrial design, but it is not limited to the minimization of a simple function such as cost or strength. Optimization tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high-performing solutions. These techniques often require the evaluation of millions of solutions, making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method of modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model, but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency several-fold. In our second contribution, we combine model-based optimization with MAP-Elites, a QD algorithm: a model of performance is created from known designs, MAP-Elites creates a new set of designs using this approximation, a subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain-specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common “recipe” for high performance. This learned encoding can then be reused by other algorithms, including MAP-Elites itself, for rapid optimization. Although design is the focus of our vision, throughout this thesis we also examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
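For readers unfamiliar with QD, the core MAP-Elites loop that the contributions above accelerate can be sketched in a few lines. This is the generic algorithm as introduced by Mouret and Clune, not the surrogate-assisted or VAE-based variants developed in the thesis, and all names are illustrative.

```python
import random

def map_elites(evaluate, random_solution, mutate, bins, iterations=10000):
    """Minimal MAP-Elites loop: `evaluate` returns (fitness, descriptor);
    the archive keeps the best solution found in each behavior bin."""
    archive = {}  # bin index -> (fitness, solution)
    for i in range(iterations):
        if archive and i > 100:  # after a random bootstrap phase
            parent = random.choice(list(archive.values()))[1]
            candidate = mutate(parent)
        else:
            candidate = random_solution()
        fitness, descriptor = evaluate(candidate)
        # Discretize the behavior descriptor (assumed to lie in [0, 1])
        # into its niche; keep the candidate if it beats the incumbent.
        key = tuple(min(int(d * b), b - 1) for d, b in zip(descriptor, bins))
        if key not in archive or archive[key][0] < fitness:
            archive[key] = (fitness, candidate)
    return archive

# Toy usage: maximize -(x^2 + y^2) while diversifying over (x, y).
elites = map_elites(
    evaluate=lambda s: (-(s[0] ** 2 + s[1] ** 2), (s[0], s[1])),
    random_solution=lambda: (random.random(), random.random()),
    mutate=lambda s: tuple(min(1.0, max(0.0, v + random.gauss(0, 0.1)))
                           for v in s),
    bins=(10, 10),
)
print(f"filled {len(elites)} of 100 niches")
```

The model-based variant described above replaces the expensive `evaluate` call with a learned surrogate for most iterations, evaluating only a chosen subset of elites for real.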
Discrimination and classification of eight strains of meat-spoilage microorganisms commonly found in poultry meat were successfully carried out using two dispersive Raman spectrometers (a microscope system and a portable fiber-optic system) in combination with chemometric methods. Principal Component Analysis (PCA) and Multi-Class Support Vector Machines (MC-SVM) were applied to develop discrimination and classification models. These models were validated using data sets that were successfully assigned to the correct bacterial genera and even to the right strain. The discrimination of bacteria down to the strain level was performed on the pre-processed spectral data using a three-stage model based on PCA; the spectral features and inter-species differences on which the discrimination was based were clarified through the PCA loadings. For MC-SVM, the pre-processed spectral data were subjected to PCA and used to build a classification model. When using the first two components, the accuracy of the MC-SVM model was 97.64% and 93.23% for the validation data collected by the Raman microscope and the portable fiber-optic Raman system, respectively. The accuracy reached 100% when using the first eight and the first ten PCs of the data collected by the Raman microscope and the portable fiber-optic Raman system, respectively. The results reflect the strong discriminative power and high performance of the developed models and the suitability of the pre-processing method used in this study, and show that the lower accuracy of the portable fiber-optic Raman system does not adversely affect the discriminative power of the developed models.
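A pipeline of this shape (standardize, project onto the first principal components, classify with a multi-class SVM) can be sketched with scikit-learn as follows; the random stand-in spectra, the component count, and the SVM parameters are assumptions for illustration, not the study's actual data or settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(240, 1024))  # stand-in for pre-processed Raman spectra
strains = rng.integers(0, 8, size=240)  # labels for eight bacterial strains

X_train, X_test, y_train, y_test = train_test_split(
    spectra, strains, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),        # first eight principal components
    SVC(kernel="rbf", C=10.0),  # one-vs-one multi-class SVM
)
model.fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_test, y_test):.2%}")
```

With real spectra in place of the random stand-ins, varying `n_components` reproduces the kind of two-versus-eight-component comparison reported above.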
Besides the individual importance of health for every person, the relevance of "healthy employees" is also growing. Especially in times of full employment, skilled-labor shortages, and a rising retirement age, the health of employees, and with it each individual's ability to work, is coming into sharper focus. The state, social insurance institutions, and companies are increasingly interested in designing workplaces and working conditions in ways that promote health. Workplace health promotion (BGF) provides the framework for the many health-promoting interventions found in occupational settings. In this context, the work break can be regarded as a suitable intervention, although it can take very diverse forms.
In forensic DNA profiling, the occurrence of complex mixed profiles is currently a common issue. Cases involving intimate swabs or skin-flake tape liftings are prone to mixed profiles because more than one donor contributes to a DNA sample. DNA profiling of single spermatozoa and skin flakes could ideally overcome the problems associated with mixed profiles. However, PCR is not sensitive enough to generate STR-based DNA profiles from single cells. Moreover, high-quality intact DNA is required but is not always available in skin flakes due to degradation. Additionally, single skin flakes are difficult to discriminate from other similar-looking particles on the tape liftings used to secure DNA samples from evidence. The main purpose of this study was to develop a method that enables DNA profiling of single sperm cells and skin flakes. After evaluating multiple whole genome amplification (WGA) protocols, REPLI-g Single Cell WGA was selected due to its suitability for the pre-amplification of template DNA. Micromanipulation was used to isolate single spermatozoa, and in combination with REPLI-g Single Cell WGA it resulted in successful DNA profiling of single spermatozoa using autosomal STRs as well as X- and Y-chromosomal STRs. The single-spermatozoon DNA profiling method described in this thesis was successfully used to identify male contributors from mock intimate swabs containing a mixture of semen from multiple male contributors. Different dyes were analysed to develop a staining method that discriminates skin flakes from other particles, including those from hair cosmetic products. Of all dyes tested, Orange G was the only one that successfully discriminated skin flakes from hair-product particles. An alkaline-based lysis protocol was also developed that allowed PCR to be carried out directly on the lysates of single skin flakes. Furthermore, REPLI-g Single Cell WGA was tested on single skin flakes; in contrast to the single spermatozoa, it was not successful in DNA profiling of single skin flakes. The single-skin-flake DNA profiling method described in this thesis was successfully used to correctly identify contributors from mock mixed DNA evidence. Additionally, a small-amplicon-based NGS method was tested on single skin flakes. Compared to the PCR and CE approach, the small-amplicon-based NGS method improved DNA profiling of single skin flakes, giving a significant increase in allele recovery. In conclusion, this study shows that circumventing mixtures is possible by DNA profiling of single spermatozoa using micromanipulation and WGA. Furthermore, DNA profiling of single skin flakes has been improved by staining tape liftings with Orange G, alkaline lysis, direct PCR, and a small-amplicon-based NGS approach. Nonetheless, future work is required to assess the performance of the single-spermatozoa method on mock swabs with more diluted semen. Commercially available NGS kits should also be tested with single skin flakes and compared with the in-house NGS method.
Due to the popularity of the Internet and the networked services it facilitates, networked devices have become increasingly common in both the workplace and everyday life in recent years, following the trail blazed by smartphones. The data provided by these devices allow rich user profiles to be created. As a result, the collection, processing, and exchange of such personal data have become drivers of economic growth. History shows that the adoption of new technologies is likely to influence both individual and societal concepts of privacy. Research into privacy has therefore been confronted with continuously changing concepts due to technological progress. From a legal perspective, privacy laws that reflect social values are sought. Privacy-enhancing technologies are developed or adapted to take account of technological development. Organizations must also identify protective measures that are effective in terms of scalability and automation. Similarly, research is being conducted from the perspective of Human-Computer Interaction (HCI) to explore design spaces that empower individuals to manage their protection needs with regard to novel data, which they may perceive as sensitive. Taking such an HCI perspective on understanding privacy management in the Internet of Things (IoT), this research focuses on three interrelated goals across the fields of application: 1. exploring and analyzing how people make sense of data, especially when managing privacy and data disclosure; 2. identifying, framing, and evaluating potential resources for designing sense-making processes; and 3. exploring the fitness of the identified concepts for inclusion in legal and technical perspectives on supporting privacy decisions in the IoT. Although this work's point of departure is the HCI perspective, it emphasizes the interrelationships among these seemingly independent perspectives; their interdependence is taken into account by following a user-centered design process throughout the study. More specifically, this thesis adopts a design case study approach, which makes it possible to conduct full user-centered design lifecycles in concrete application cases with participants in the context of everyday life. Based on this approach, several currently relevant domains of the IoT were investigated, namely smart metering, smartphones, smart homes, and connected cars. The results show that the participants were less concerned about (raw) data than about the information that could potentially be derived from it. Against the background of the constant collection of highly technical and abstract data, whose content only becomes visible through the application of complex algorithms, this study indicates that people should be able to explore and understand these data flexibly, and it provides insights into how to design support for this aim. From the point of view of designing usable privacy protection measures, the information provided to users about data disclosure should focus on its consequences for users' environments and lives. A related concept from law is “informed consent,” which I propose should be further developed in order to implement usable mechanisms for individual privacy protection in the era of the IoT.
Finally, this thesis demonstrates how HCI research can be methodologically embedded in a regulatory process that informs both the development of technology and the drafting of legislation.
Solving differential-algebraic equations (DAEs) efficiently by means of appropriate numerical schemes for time integration is an ongoing topic in applied mathematics. Especially for the large systems that occur in many fields of practical application, for instance when simulating network structures for fluid and gas transport or electrical circuits, efficient computation becomes relevant. Due to the stiffness properties of DAEs, time integration of such problems generally demands implicit strategies. Among the schemes that prove to be an adequate choice are linearly implicit Runge-Kutta methods in the form of Rosenbrock-Wanner (ROW) schemes. Compared to fully implicit methods, they are easy to implement and avoid the solution of non-linear equations by including Jacobian information within their formulation. However, Jacobian calculations are costly, so the necessity of computing the exact Jacobian with every successful time step proves to be a considerable drawback. To overcome this drawback, a ROW-type method is introduced that allows for non-exact Jacobian entries when solving semi-explicit DAEs of index one. The resulting scheme thus enables several strategies for saving computational effort to be exploited, for example partial explicit integration of non-stiff components, more advantageous sparse Jacobian structures, or time-lagged Jacobian information. In fact, due to the property of allowing for non-exact Jacobian expressions, the given scheme can be interpreted as a generalized ROW-type method for DAEs, because it covers many different ROW-type schemes known from the literature. To derive the order conditions of the introduced ROW-type method, a theory is developed that allows the occurring differentials and coefficients to be identified graphically by means of rooted trees. Rooted trees for describing numerical methods were originally introduced by J.C. Butcher; they significantly simplify the determination and definition of relevant characteristics because they allow straightforward procedures to be applied. The theory presented combines the strategies used to represent ROW-type methods with exact Jacobian for DAEs and ROW-type methods with non-exact Jacobian for ODEs. For this purpose, new types of vertices are considered in order to describe the occurring non-exact elementary differentials completely. The resulting theory thus automatically comprises relevant approaches known from the literature; as a consequence, it makes it possible to recognize the order conditions of the familiar methods covered and to identify new conditions. With the theory developed, new sets of coefficients are derived that realize the introduced ROW-type method up to orders two and three. Some of them are constructed on the basis of methods known from the literature that satisfy additional conditions for avoiding order reduction. It is shown that these methods can be improved by means of the newly derived order conditions without having to increase the number of internal stages. The convergence of the resulting methods is analyzed for several academic test problems. The results verify the theory and the order conditions found, as only schemes satisfying the predicted order conditions preserve their order when non-exact Jacobian expressions are used.
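For orientation, an s-stage ROW scheme for an ODE $y' = f(y)$ can be written in the standard textbook form (cf. Hairer and Wanner); the thesis generalizes this setting to semi-explicit index-one DAEs with a non-exact Jacobian:

```latex
% s-stage Rosenbrock-Wanner (ROW) scheme, standard form:
\bigl(I - h\,\gamma_{ii} J\bigr)\,k_i
  = h\,f\!\Bigl(y_n + \sum_{j=1}^{i-1} \alpha_{ij}\,k_j\Bigr)
    + h\,J \sum_{j=1}^{i-1} \gamma_{ij}\,k_j,
  \qquad i = 1,\dots,s,
\qquad
y_{n+1} = y_n + \sum_{i=1}^{s} b_i\,k_i,
\qquad
J \approx \frac{\partial f}{\partial y}(y_n).
```

Only linear systems must be solved per stage. When $J$ is merely an approximation to the exact Jacobian, additional order conditions arise; conditions of this kind are what the rooted-tree theory described above is designed to enumerate.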
The present thesis elucidates the development of (i) a series of small-molecule inhibitors that react in a covalent-irreversible manner with the targeted proteases and (ii) a fluorescently labeled activity-based probe as a pharmacological tool compound for investigating specific functions of these enzymes in vitro. The rational design, organic synthesis, and quantitative structure-activity relationships are described extensively.
Globalisation and increasing international trade have for years raised the number of introduced foreign species and the risk posed by invasive pests. While native species have adapted to their habitat over many years and generations, invasive intruders often possess characteristics that are superior to those of native species. Thus, and because of a lack of natural enemies, they bear the potential to decimate or completely displace native species; furthermore, as vectors they can introduce pathogens or nematodes, which carries a high damage potential. The measures available to the local plant protection services for combating invasive species are limited: they are confined to felling infested trees or plants and regular controls within the infested area. The spread of single infestations can thereby be prevented, but undetected infestations can spread unimpeded, which highlights the main challenge: the detection of the species. This concerns infestations in open land as well as single animals on their path of introduction. There is only little research activity on the development of new, adequate detection systems for invasive species. In other fields, such as the detection of explosives or narcotics, research activities date back more than a decade, and consequently detection systems are available that are, for example, used for explosive detection at airports. The detection principle is based on the chemistry of these substances.
Evaluation and Optimization of IEEE802.11 multi-hop Backhaul Networks with Directional Antennas
(2020)
A major problem for rural areas is the lack of access to affordable broadband Internet connections. In these areas distances are large, and digging a cable into the ground is extremely expensive considering the small number of potential customers at the end of that cable. This leads to a digital divide, where urban areas enjoy high-quality service at low cost while rural areas suffer from the reverse.
This work is dedicated to an alternative technical approach aiming to reduce the cost for Internet Service Providers in rural areas: WiFi-based Long Distance networks. A set of significant contributions to technological aspects of WiFi-based Long Distance networks is described in three fields: propagation on long-distance Wi-Fi links, MAC-layer scheduling, and interference modeling and channel assignment with directional antennas.
For each field, the author reviews and discusses the state of the art, derives research questions, and tackles several open issues in order to develop these networks further towards a suitable technology for the backhaul segment.
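As an illustration of the propagation aspect, a long-distance Wi-Fi link is typically dimensioned with a free-space link budget. The sketch below uses the standard Friis path-loss formula in engineering units; the transmit power, antenna gains, distance, and frequency are assumed example figures, not values from the thesis.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard Friis form in km/MHz units):
    FSPL = 20 log10(d_km) + 20 log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    # Simple link budget: directional antenna gains on both ends offset
    # the path loss; cable losses and fading margins are omitted here.
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# Illustrative 10 km link at 2.4 GHz with 24 dBi dishes on both ends:
p_rx = rx_power_dbm(tx_dbm=20, tx_gain_dbi=24, rx_gain_dbi=24,
                    distance_km=10, freq_mhz=2400)
print(f"received power: {p_rx:.1f} dBm")
```

Comparing the received power against the receiver sensitivity for a given modulation rate indicates whether the link closes with margin, which is exactly the trade-off high-gain directional antennas are meant to win.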
In this thesis, unique administrative data, a relevant follow-up period, and advanced statistical measures to handle confounding were utilized to provide new and informative evidence on the effects of vocational rehabilitation programs on work participation outcomes in Germany. While re-affirming the important role of micro-level determinants, the present study provides an extensive example of the individual and fiscal effects that are possible through meaningful vocational rehabilitation measures. The analysis showed that the principal objective, namely to improve participation in employment, was generally achieved. Contrary to the common misconception that “off-the-job training” is relatively ineffective, this thesis provides an empirical example of the positive impact of such programs.
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature significantly influence the viscoelastic behavior in certain temperature ranges. Flow-induced orientations due to higher draw ratios and higher mold temperatures lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected in the local crystallinities and local complex moduli. Additionally, a model for calculating three-dimensional anisotropic coefficients of thermal expansion as a function of the process-dependent crystallinity was developed, based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
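For such a stacked two-phase unit cell, the classical laminate estimates indicate how crystallinity enters the effective coefficient of thermal expansion. The following generic rule-of-mixtures and modulus-weighted (Turner-type) forms are given for orientation only and are not claimed to be the exact unit-cell model of the study:

```latex
\alpha_{\mathrm{ROM}} = \varphi_c\,\alpha_c + (1-\varphi_c)\,\alpha_a,
\qquad
\alpha_{\mathrm{Turner}} =
  \frac{\varphi_c E_c\,\alpha_c + (1-\varphi_c)\,E_a\,\alpha_a}
       {\varphi_c E_c + (1-\varphi_c)\,E_a}
```

Here $\varphi_c$ is the crystalline volume fraction obtained from the measured crystallinity, $E_c, E_a$ are the phase stiffnesses, and $\alpha_c, \alpha_a$ the phase expansion coefficients; in a layered stack the two forms apply to different loading directions, which is what makes the effective expansion anisotropic.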
Computer graphics research strives to synthesize images of high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge, involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices: dynamic range, pixel densities, and refresh rates are constantly increasing, and display systems address a larger visual field by covering a wider field of view, either due to their size or in the form of head-mounted devices. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye’s optics, is filtered, and is processed by higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow images to be created at higher quality and within a reduced time frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload, a concept we call perception-driven rendering. This research results in several practical rendering approaches that tackle some of the fundamental challenges of computer graphics. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capture of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrates that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
The initially large number of variants is reduced by applying custom variant annotation and filtering procedures. This requires complex software toolchains to be set up and data sources to be integrated. Furthermore, increasing study sizes require ever higher efforts to manage datasets in a multi-user, multi-institution environment. It is common practice to expect numerous iterations of re-specification and refinement of filter strategies when the cause of a disease or phenotype is unknown. Data analysis support during this phase is fundamental, because handling the large volume of data is impossible or impractical for users with limited computer literacy. Constant feedback and communication are necessary when filter parameters are adjusted or the study grows with additional samples. Consequently, variant filtering and interpretation become time-consuming and hinder dynamic, explorative data analysis by experts.
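As a minimal sketch of one such filter iteration, the snippet below keeps rare, potentially damaging variants from an annotated tab-separated file. The column names (gnomAD_AF, Consequence) follow common annotation conventions but are assumptions here, not the toolchain actually used in the study.

```python
import csv

def filter_variants(path, max_af=0.01, consequences=(
        "stop_gained", "frameshift_variant", "missense_variant")):
    """Keep rare variants with a potentially damaging consequence."""
    kept = []
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            # Missing or empty population frequency is treated as 0 (novel).
            af = float(row.get("gnomAD_AF") or 0.0)
            if af <= max_af and row.get("Consequence") in consequences:
                kept.append(row)
    return kept

# Each refinement iteration re-runs the filter with adjusted thresholds,
# e.g. tightening the allele-frequency cutoff:
# rare = filter_variants("annotated_variants.tsv", max_af=0.001)
```

The iterative re-specification described above corresponds to re-running such a step with adjusted thresholds and consequence sets as the study grows.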