H-BRS Bibliography: Doctoral Theses
Solving differential-algebraic equations (DAEs) efficiently by means of appropriate numerical schemes for time-integration is an ongoing topic in applied mathematics. Efficient computation becomes particularly relevant for the large systems that arise in many fields of practical application, for instance when simulating networks that transport fluid or gas, or electrical circuits. Due to the stiffness properties of DAEs, time-integration of such problems generally demands implicit strategies. Among the schemes that prove to be an adequate choice are linearly implicit Runge-Kutta methods in the form of Rosenbrock-Wanner (ROW) schemes. Compared to fully implicit methods, they are easy to implement and avoid the solution of non-linear equations by including Jacobian information within their formulation. However, Jacobian calculations are costly, so the need to compute the exact Jacobian at every successful time-step is a considerable drawback. To overcome this drawback, a ROW-type method is introduced that allows for non-exact Jacobian entries when solving semi-explicit DAEs of index one. The resulting scheme thus makes it possible to exploit several strategies for saving computational effort, such as partially explicit integration of non-stiff components, more advantageous sparse Jacobian structures, or time-lagged Jacobian information. In fact, because it allows for non-exact Jacobian expressions, the given scheme can be interpreted as a generalized ROW-type method for DAEs, as it covers many different ROW-type schemes known from the literature. To derive the order conditions of the ROW-type method introduced, a theory is developed that identifies the occurring differentials and coefficients graphically by means of rooted trees.
Rooted trees for describing numerical methods were originally introduced by J. C. Butcher. They significantly simplify the determination and definition of relevant characteristics because they allow straightforward procedures to be applied. In fact, the theory presented combines strategies used to represent ROW-type methods with exact Jacobian for DAEs and ROW-type methods with non-exact Jacobian for ODEs. For this purpose, new types of vertices are introduced in order to describe the occurring non-exact elementary differentials completely. The resulting theory thus automatically comprises relevant approaches known from the literature. As a consequence, it makes it possible to recognize the order conditions of familiar methods covered and to identify new conditions. With the theory developed, new sets of coefficients are derived that realize the ROW-type method introduced up to orders two and three. Some of them are constructed on the basis of methods known from the literature that satisfy additional conditions for avoiding effects of order reduction. It is shown that these methods can be improved by means of the newly derived order conditions without having to increase the number of internal stages. Convergence of the resulting methods is analyzed with respect to several academic test problems. The results verify the theory and the order conditions found, as only schemes satisfying the predicted order conditions preserve their order when non-exact Jacobian expressions are used.
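The core of a ROW step, replacing a non-linear solve by one linear solve with (possibly inexact) Jacobian information, can be sketched with the simplest one-stage scheme, the linearly implicit Euler method. This is a generic textbook sketch on an invented scalar test problem, not the generalized ROW-type method derived in the thesis; the occasional Jacobian refresh merely mimics the time-lagged-Jacobian strategy mentioned above.

```python
import math

def rosenbrock_euler(f, dfdy, y0, t0, t1, h):
    """One-stage Rosenbrock (linearly implicit Euler) step for y' = f(t, y):
    solve (1 - h*J) k = f(t_n, y_n), then y_{n+1} = y_n + h*k.
    J may be a non-exact approximation of df/dy -- here it is re-evaluated
    only every few steps to mimic a time-lagged Jacobian."""
    t, y = t0, y0
    J = dfdy(t, y)
    step = 0
    while t < t1 - 1e-12:
        if step % 5 == 0:          # refresh the Jacobian only occasionally
            J = dfdy(t, y)
        k = f(t, y) / (1.0 - h * J)
        y += h * k
        t += h
        step += 1
    return y

# Stiff scalar test problem: y' = -50*(y - cos t); the solution tracks cos t.
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
y_end = rosenbrock_euler(f, dfdy, 0.0, 0.0, 1.0, 0.02)
print(abs(y_end - math.cos(1.0)))   # small tracking error despite stiffness
```

Note that an explicit Euler step with the same h = 0.02 would be unstable here (|1 - 50h| = 0 is borderline; larger stiffness breaks it), while the linearly implicit step stays stable without any non-linear iteration.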
The art of nudging
(2023)
Do simple and subtle changes in the living and study environment improve the eating behaviour of students in an educational setting? This dissertation provides a not-so-simple answer to this simple question based on the outcomes of four studies that explore the effects and design of artwork nudges (specifically the artwork of Alberto Giacometti) on the eating behaviour of students by applying different research designs. Study 1 explores the effects of a Giacometti-like nudge (a more contemporary version of the original nudge) regarding the dietary behaviour of high school students in a controlled setting. Study 2 applies different artwork nudges within a virtual vignette setting to measure their effects on virtual meal choices made. Also, the degree to which individuals were aware of the nudge’s presence is included as an influential factor in nudge effectiveness. Study 3 assesses the susceptibility to nudges as measured with a questionnaire. Susceptibility to nudges is defined as nudgeability. Study 4 assesses the effects of the original Giacometti nudge in a real-world university cafeteria setting. Specifically, the immediate and sustained effects of the original Giacometti nudge on students’ meal purchases in the university cafeteria are considered. In addition, the role of awareness of the nudge’s presence as well as the acceptance of this specific nudge are discussed. The conclusion is drawn that the original Giacometti nudge should only be applied in an educational setting to improve healthy eating behaviour if the intended target groups and environment meet certain conditions. Artwork nudges in general should be applied only after rigorous testing of various types of different nudges and more research reflecting healthy eating in its entirety.
Microorganisms not only contribute to the spoilage of food but can also cause illness through consumption. Consumer concerns and doubts about product shelf life, and the resulting enormous amounts of food waste, have created demand for a rapid, robust, and non-destructive method for detecting microorganisms, especially in the food sector. Therefore, a rapid and simple sampling method for the Raman and infrared (IR) microspectroscopic study of microorganisms associated with spoilage processes was developed. For the subsequent evaluation, pre-processing routines as well as chemometric models for the classification of spoilage microorganisms were developed. The microbiological samples are taken using a disinfectable sampling stamp and measured by microspectroscopy without the usual pre-treatments such as purification, separation, washing, and centrifugation. The resulting complex multivariate data sets were pre-processed, reduced by principal component analysis, and classified by discriminant analysis. Classification of independent unlabeled test data showed that microorganisms could be classified at genus, species, and strain levels with accuracies of 96.5 % (Raman) and 94.5 % (IR), respectively, despite large biological differences and the novel sampling strategy. As bacteria are exposed to constantly changing conditions and their adaptation mechanisms may make them inaccessible to conventional measurement methods, the methods and models developed were also examined for their suitability for microorganisms exposed to stress. Compared to normal growth conditions, spectral changes in lipids, polysaccharides, nucleic acids, and proteins were observed in stressed microorganisms. Models were developed to discriminate microorganisms independently of the involvement of various stress factors and storage times.
Classification of the investigated bacteria yielded accuracies of 97.6 % (Raman) and 96.6 % (IR), respectively, and a robust and meaningful model was developed to discriminate different microorganisms at the genus, species, and strain levels. The obtained results are very promising and show that the methods and models developed for the discrimination of microorganisms as well as the investigation of stress factors on microorganisms by means of Raman- and IR-microspectroscopy have the potential to be used, for example, in the food sector for the rapid determination of surface contamination.
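The evaluation chain described above (pre-processing, dimensionality reduction, classification) can be caricatured in a few lines. The sketch below uses standard-normal-variate scaling and a nearest-centroid rule on synthetic "spectra" as simple stand-ins for the PCA and discriminant-analysis models of the thesis; the genus labels and all data are invented for illustration.

```python
import random
import statistics

def snv(spectrum):
    """Standard normal variate: centre and scale one spectrum."""
    m = statistics.mean(spectrum)
    s = statistics.stdev(spectrum)
    return [(x - m) / s for x in spectrum]

def centroid(spectra):
    return [statistics.mean(vals) for vals in zip(*spectra)]

def classify(spectrum, centroids):
    """Nearest-centroid rule standing in for PCA + discriminant analysis."""
    x = snv(spectrum)
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Synthetic 'spectra': two genera with band maxima at different channels.
random.seed(1)
base = {"Pseudomonas": [0, 1, 5, 1, 0, 0], "Bacillus": [0, 0, 1, 5, 1, 0]}
noisy = lambda ref: [v + random.gauss(0, 0.3) for v in ref]
train = {g: [snv(noisy(ref)) for _ in range(20)] for g, ref in base.items()}
centroids = {g: centroid(specs) for g, specs in train.items()}

hits = sum(classify(noisy(ref), centroids) == g
           for g, ref in base.items() for _ in range(10))
print(hits, "of 20 test spectra correctly classified")
```

In the real pipeline the feature space is the full Raman or IR spectrum and the class boundaries come from discriminant analysis on principal-component scores; the toy version only shows where pre-processing, model building, and prediction sit relative to each other.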
Owing to increasing raw-material scarcity, the search for alternative, sustainable raw materials is moving ever more into the foreground. With regard to efficient chemical utilization, lignin offers numerous advantages for various fields of application, for example for bio-based polyurethane coatings, e.g. for corrosion protection. The main problems in using lignin arise from the heterogeneity of this natural product and from its low polymerization compatibility with polyolefins; both factors affect, among other things, the mechanical properties of the corresponding lignin-based polymers. Moreover, the specific structure, and hence the physico-chemical properties, of lignin depend strongly on the respective raw-material source and the extraction process.
The aim of this work was the structural elucidation of unmodified and modified kraft lignins (KL) and the investigation of the reactivity of aromatic and aliphatic hydroxyl groups as a function of pH. To this end, unmodified KL were extracted from black liquor and subsequently subjected to Soxhlet extraction in order to obtain lignin fractions soluble in methyltetrahydrofuran, predominantly of aromatic character, and thus to ensure improved solubility in THF, the solvent used in the subsequent polyurethane synthesis. In addition, the extracted KL were chemically modified by demethylation of methoxy groups, and the number of hydroxyl groups required for polymerization was quantified by wet-chemical methods and by differential UV/VIS spectroscopy. Subsequently, lignin-based, functionalized polyurethane coatings were synthesized with particular attention to ecological and economic sustainability. Surface functionalization made it possible to improve surface homogeneity and, via blend formation, to embed triphenylmethane (TPM) dyes into the coatings. Regarding the influence of the pH chosen during extraction (pH = 2-5) on the behaviour of the KL obtained, changes in both the structure of the lignins and their thermal stability were observed. It was also shown that the functionality and reactivity of the aromatic and aliphatic hydroxyl groups in lignin increase with increasing pH. Homogeneous lignin-based polyurethane coatings (LPU coatings) were successfully synthesized from unmodified KL; when KL extracted at higher pH values were used, these LPU coatings exhibited a more homogeneous, hydrophobic surface texture and good thermal stability.
Additional modification of the KL by demethylation led, owing to the increased number of free hydroxyl groups, to a moderate increase in reactivity and thus to a further improvement of the surface properties with respect to a homogeneous surface structure and brilliance. With regard to sustainability, synthesis optimization, consisting of adjusting the raw-material particle size, ultrasonic treatment, and use of the commercial trifunctional polyether polyol Lupranol® 3300 in combination with Desmodur® L75, increased the solubility of lignin in the polyol as well as the thermal stability of the LPU coatings. In the course of these optimizations, shortened drying times saved energy and the amounts of commercially available chemicals could be reduced; both savings lowered costs. At the same time, not only could the KL content in the polymer coating be increased: an optimized, economical one-step synthesis also simplified the transfer of this approach to industrial applications. Embedding selected TPM dyes (crystal violet and brilliant green) into the LPU coatings by blend formation demonstrably gave the surface coating an antimicrobial effect without any loss of surface homogeneity. The LPU coatings synthesized in this work could in future find application as corrosion-protection and antimicrobial coatings, e.g. in agriculture and in the construction sector.
The findings obtained in the present work contribute to the structural elucidation of the complex biopolymer lignin. Furthermore, the investigations and results provide a basis for the sustainable production of lignin-based polymer coatings, which will become increasingly important in the future.
In the present scientific work, the potential of simple semiconductor gas sensors for use in complex problems was explored. One topic that came into focus here, burningly topical in the truest sense of the word, is the detection of explosive substances. 42547: that is the number of terrorist attacks committed with energetic materials in the period from 2000 to 2016, more than half of which claimed human lives. Terrorism is a threat, and new explosive substance mixtures whose analytical data are not contained in any detector database currently pose an enormous threat, since such hazardous substances are difficult to detect with established library-based methods. In this work, a library-free detector was developed that can quickly and reliably assess the explosiveness of unknown substances by evaluating their reaction profiles. It was shown that the use of semiconductor gas sensors in combination with photodiodes and a pressure sensor, given carefully designed reaction control and evaluation algorithms tailored to the task, is effective and enables an extremely high detection rate of 99.8%. Furthermore, a simple fabrication route for semiconductor gas sensors based on the existing precursor library was found, which in future will allow targeted manipulation of the sensors' sensing properties by varying the precursor used and the sensor fabrication parameters. The sensors produced in this way were integrated into the detector developed and showed great potential, beyond library-free assessment of the explosiveness of an unknown substance, for drawing conclusions about its identity as well.
In recent years, eXtended Reality (XR) technology like Augmented Reality and Virtual Reality became both technically feasible and affordable, which led to a sharp demand for professionally designed and developed applications. However, this demand, combined with a rapid pace of innovation, revealed a lack of design tool support for professional interaction designers as well as a knowledge gap regarding their approaches and needs. To address this gap, this thesis engages with the work of professional XR interaction designers in a qualitative inquiry into XR interaction design approaches. It applies two complementary lenses stemming from scientific design and social practice theory discourses to observe, describe, analyze, and understand professional XR interaction designers' challenges and approaches, with a focus on application prototyping.
In this work, novel methodological extensions of the lattice Boltzmann method (LBM) are developed that enable more efficient simulations of incompressible vortical flows. These extensions address two main problems of the standard LBM: its instability in under-resolved turbulent simulations and its restriction to regular computational grids. First, a pseudo-entropic stabilization (PES) is developed. It combines approaches from multiple-relaxation-time (MRT) models and the entropic LBM into an explicit, local, and flexible stabilization operator. This modification of the collision step permits stable and qualitatively correct simulations even on strongly under-resolved grids. To extend the LBM to irregular grids, a modern discontinuous Galerkin LBM is first studied and complemented with more stable time integrators. This study demonstrates the drastic weaknesses of existing LBM approaches on irregular grids. Based on the insights gained, a novel semi-Lagrangian LBM (SLLBM) is formulated. It uniquely enables the use of irregular grids and large time steps as well as a high spatial order of convergence. Example simulations demonstrate why this approach is superior in efficiency and accuracy to other current off-lattice Boltzmann methods (OLBMs). Further novel aspects of this work are the development of a modular off-lattice Boltzmann code and the extension of the LBM by implicit multi-step methods, which increase the temporal order of convergence.
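The collision step and streaming step that all of these extensions modify can be shown in their barest form. The sketch below is the standard BGK collide-and-stream loop on a regular one-dimensional periodic lattice (D1Q3, diffusive equilibrium), chosen only to expose the baseline structure; it is not the stabilized or semi-Lagrangian scheme developed in the thesis.

```python
# Minimal D1Q3 lattice Boltzmann sketch of the collide-and-stream loop
# (plain BGK on a regular periodic grid -- illustrative baseline only).
N, TAU, STEPS = 64, 0.8, 100
w = [4/6, 1/6, 1/6]            # weights for velocities 0, +1, -1
# initial populations: uniform density 1.0 with a bump of 2.0 in the middle
f = [[w[i] * (2.0 if n == N // 2 else 1.0) for n in range(N)] for i in range(3)]

for _ in range(STEPS):
    rho = [f[0][n] + f[1][n] + f[2][n] for n in range(N)]
    # BGK collision: relax every population toward the equilibrium w_i * rho
    for i in range(3):
        for n in range(N):
            f[i][n] += (w[i] * rho[n] - f[i][n]) / TAU
    # streaming: shift the +1 and -1 populations along their velocities
    f[1] = [f[1][(n - 1) % N] for n in range(N)]
    f[2] = [f[2][(n + 1) % N] for n in range(N)]

rho = [f[0][n] + f[1][n] + f[2][n] for n in range(N)]
print(sum(rho))   # total mass is conserved by collide-and-stream
```

The restriction the thesis removes is visible in the streaming lines: populations may only hop to direct lattice neighbours per time step, which ties the method to regular grids and unit time steps.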
Pseudopotential (PP) lattice Boltzmann methods are increasingly used for the simulation of multiphase flows. Since they are based on a phenomenological approach, their use involves considerable modelling effort. In addition, so-called spurious velocities arise at phase boundaries, impairing accuracy and numerical stability. In this work, PP models are therefore extended in three new respects. First, it is shown that modelling different contact angles with common methods in combination with improved forcing schemes produces spurious droplets. These are eliminated by a novel approach based on additional boundary conditions for all interaction forces. This technique not only prevents the appearance of spurious droplets but also increases stability in wall-bounded flows. Second, a novel procedure for reducing spurious velocities is introduced. The discretization of the interaction forces is extended, and the additional free coefficients are optimized numerically in simulations of static droplets. The resulting discretization was validated in simulations of stationary and dynamic test cases, with spurious velocities significantly reduced. Third and last, the diffusion properties in multicomponent systems were investigated in detail, revealing a critical dependence of the macroscopic diffusion coefficients on the forcing scheme. This analysis forms the basis for comparing, and for the future development of, new potential functions (for multicomponent systems) and reduces the modelling effort.
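The phenomenological interaction at the heart of pseudopotential models can be illustrated in one dimension: a density-dependent pseudopotential ψ(ρ) generates a force on each node from its neighbours, which is zero in the bulk and peaks at a phase boundary. The snippet follows the classic Shan-Chen form with an arbitrary coupling G and ψ(ρ) = 1 − exp(−ρ); it is a generic illustration, not the extended force discretizations developed in the thesis.

```python
import math

def shan_chen_force(rho, G=-5.0):
    """1-D Shan-Chen-style interaction force
    F_n = -psi_n * G * sum_i w_i * psi(n + e_i) * e_i
    with nearest neighbours e = +1/-1, weights 1/2, psi(rho) = 1 - exp(-rho).
    Parameters are illustrative, not from the thesis."""
    N = len(rho)
    psi = [1.0 - math.exp(-r) for r in rho]
    F = []
    for n in range(N):
        neighbour_sum = 0.5 * psi[(n + 1) % N] - 0.5 * psi[(n - 1) % N]
        F.append(-psi[n] * G * neighbour_sum)
    return F

# Uniform density: no net force anywhere.  A density jump (an interface):
# the force is concentrated at the jump, pulling the phases together.
flat = shan_chen_force([1.0] * 8)
step = shan_chen_force([0.1] * 4 + [2.0] * 4)
print(max(abs(x) for x in flat), max(abs(x) for x in step))
```

The spurious velocities discussed above originate precisely in this construction: the discrete neighbour sum only approximates the continuum force, and the approximation error at curved interfaces does not vanish.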
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context, this work focuses on specific faults that arise from the interaction of a robot with its real-world environment. Even a well-modelled robot may fail to perform its tasks successfully due to external faults, which occur because of an infinite number of unforeseeable and unmodelled situations. By investigating the most frequent failures in typical scenarios observed in real-world demonstrations and competitions with the autonomous service robots Care-O-Bot III and youBot, we identified four fault classes, caused by disturbances, imperfect perception, an inadequate planning operator, or the chaining of action sequences. This thesis then presents two approaches to handle external faults caused by insufficient knowledge about the preconditions of a planning operator. The first approach reasons about detected external faults using knowledge of naive physics. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of physical laws to these properties in order to reason. By interpreting the reasoning results, the robot identifies information about the situations that can cause the fault. Applying this approach to simple manipulation tasks like picking and placing objects shows that naive physics holds great promise for reasoning about unknown external faults in robotics. The second approach acquires missing knowledge about the execution of an action through learning by experimentation. Firstly, it investigates a representation of execution-specific knowledge that can be learned for one particular situation and reused in situations that deviate from the original. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively.
This representation is called an action execution model (AEM) here. The approach provides a learning strategy that uses a physical simulation to generate the training data for learning both the symbolic and the geometric aspects of the model. The experimental analysis, performed on two physical robots, shows that an AEM can reliably describe execution-specific knowledge and can thereby serve as a potential model for avoiding the occurrence of external faults.
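The naive-physics idea described above, formalizing qualitative physical properties of objects and checking them against qualitative laws to explain a failed action, can be caricatured in a few lines. The properties and rules below are invented for illustration and are not the thesis's logical formalization.

```python
# Toy qualitative 'naive physics' check for a place action: an object can
# rest on a support only if the support is rigid, flat on top, and at least
# as large as the object's base.  Predicates are illustrative inventions.
def explain_place_failure(obj, support):
    """Return the qualitative laws violated by this (object, support) pair."""
    reasons = []
    if not support["rigid"]:
        reasons.append("support deforms under load")
    if not support["flat_top"]:
        reasons.append("object cannot rest on a non-flat surface")
    if obj["base_area"] > support["area"]:
        reasons.append("support smaller than the object's base")
    return reasons or ["no qualitative law violated: fault cause unknown"]

mug = {"base_area": 30}
sponge = {"rigid": False, "flat_top": True, "area": 100}
print(explain_place_failure(mug, sponge))  # -> ['support deforms under load']
```

The payoff of such reasoning, as the abstract notes, is that the robot obtains a situation-level explanation of an external fault rather than a bare failure signal.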
Western consumption patterns are strongly associated with environmental pollution and climate change, which challenges us to transform our society and consumption towards a sustainable future. This thesis takes up this challenge and aims to contribute to this debate at the intersection of ICT artifacts and social practices through the examples of food and mobility consumption. The social practice lens is employed as an alternative to the predominant persuasive or motivational lens of design in the respective consumption domains. Against this background, the thesis first presents three research papers that contribute to a broader understanding of dynamic practices and their transformation towards a sustainable stable state. The subsequent research takes up the empirical results of these sections and focuses more intensely on the appropriation of materials and infrastructures by means of Recommender Systems. With this approach, the thesis contributes to three fields: practice-based Computing, Recommender Systems, and Consumer Informatics.
To this day, assessing performance development in cycling requires specific performance diagnostics with prescribed test protocols. At the same time, the greatly increased popularity of wearable devices makes it very easy nowadays to record heart rate in everyday life and during sporting activities. Yet a suitable model of heart rate that allows conclusions to be drawn about performance development has so far been lacking. Using heart rate recordings in combination with a phenomenologically interpretable model to draw conclusions about performance development as directly as possible, and without specific requirements on the training rides, offers the chance to considerably simplify insight into one's own performance development, both in professional cycling and in ambitious amateur practice. This thesis presents a novel, phenomenologically interpretable model for simulating and predicting heart rate in cycling and validates it in an empirical study. The model makes it possible to simulate heart rate (as well as other strain parameters from respiratory gas analyses) with adequate accuracy and to predict it for a prescribed power profile. Furthermore, a method for reducing the number of calibratable free model parameters is presented and validated in two empirical studies. After an individualized parameter reduction, the model can be used with only a single free parameter. This remaining free parameter can then be compared over time with the course of performance development.
Two different studies indicate that this free model parameter appears fundamentally capable of reflecting the course of performance development over time.
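The general idea of driving heart rate from power output with a phenomenological, interpretable model can be sketched with a textbook first-order response. All parameters below (resting heart rate, gain k, time constant τ) are invented for illustration, and this is not the model developed in the thesis; it only shows how a single calibratable parameter can summarize the heart-rate response to a given power profile.

```python
# Generic first-order phenomenological heart-rate response to cycling power:
#   dHR/dt = (HR_rest + k * P(t) - HR) / tau
# Textbook-style sketch; hr_rest, k, and tau are illustrative values only.
def simulate_hr(power, hr_rest=60.0, k=0.45, tau=30.0, dt=1.0):
    hr, out = hr_rest, []
    for p in power:                     # one power sample per dt seconds
        target = hr_rest + k * p        # steady-state HR for this power
        hr += dt * (target - hr) / tau  # explicit Euler relaxation step
        out.append(hr)
    return out

# 10 minutes at a constant 200 W: heart rate relaxes toward
# hr_rest + k * 200 = 150 bpm with time constant tau.
trace = simulate_hr([200.0] * 600)
print(round(trace[-1], 1))
```

In such a model the gain k plays the role of the single free parameter: if, after calibration, a rider reaches the same heart rate at higher power, k has decreased, which is the kind of longitudinal signal the thesis extracts from ordinary training rides.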
This thesis explores novel haptic user interfaces for touchscreens and for virtual and remote environments (VEs and REs). All feedback modalities were designed to study performance and perception while focusing on integrating an additional sensory channel: the sense of touch. Related work has shown that tactile stimuli can increase performance and usability when interacting with a touchscreen, and that perceptual aspects in virtual environments can be improved by haptic feedback. Motivated by these findings, this thesis examines the versatility of haptic feedback approaches. For this purpose, five haptic interfaces from two application areas are presented. Research methods from prototyping and experimental design are discussed and applied; these methods are used to create and evaluate the interfaces, for which seven experiments were performed. All five prototypes use a unique feedback approach. While the three haptic user interfaces designed for touchscreen interaction address the fingers, the two interfaces developed for VEs and REs target the feet. Within touchscreen interaction, an actuated touchscreen is presented, and a study shows the limits and perceptibility of geometric shapes. The combination of elastic materials and a touchscreen is examined with the second interface, and a psychophysical study was conducted to highlight its potential. The back of a smartphone is used for haptic feedback in the third prototype; besides a psychophysical study, it is found that touch accuracy could be increased. The interfaces presented in the second application area also highlight the versatility of haptic feedback. In the first prototype, the sides of the feet are stimulated to provide proximity information about remote environments sensed by a telepresence robot; a study found that spatial awareness could be increased. Finally, the soles of the feet are stimulated:
a purpose-built foot platform that provides several feedback modalities shows that self-motion perception can be increased.
Globalisation and increasing international trade have for years raised the number of introduced foreign species and the risk posed by invasive pests. Whereas native species have adapted to their habitat over many years and generations, invasive intruders often possess characteristics that are superior to those of native species. Because of this, and because of a lack of natural enemies, they can decimate or completely displace native species; furthermore, as vectors they can introduce pathogens or nematodes, with high damage potential. The measures available to local plant protection services to combat invasive species are limited. They are restricted to felling infested trees or plants and to regular controls within the infested area. The spread of single infestations can thereby be prevented, but undetected infestations can spread unimpeded, which points to the main challenge: the detection of the species. This concerns infestations in the open as well as single animals on their path of introduction. There is only little research activity concerning the development of new, adequate detection systems for invasive species. In other fields, such as the detection of explosives or narcotics, research activities date back more than a decade, and detection systems are consequently available, used, for example, for explosive detection at airports. The detection principle is based on the chemistry of these substances.
This research investigates the efficacy of multisensory cues for locating targets in Augmented Reality (AR). Sensory constraints can impair perception and attention in AR, leading to reduced performance due to factors such as conflicting visual cues or a restricted field of view. To address these limitations, the research proposes head-based multisensory guidance methods that leverage audio-tactile cues to direct users' attention towards target locations. The research findings demonstrate that this approach can effectively reduce the influence of sensory constraints, resulting in improved search performance in AR. Additionally, the thesis discusses the limitations of the proposed methods and provides recommendations for future research.
The knowledge of Software Features (SFs) is vital for software developers and requirements specialists during all software engineering phases: to understand and derive software requirements, to plan and prioritize implementation tasks, to update documentation, or to test whether the final product correctly implements the requested SF. In most software projects, SFs are managed in conjunction with other information such as bug reports, programming tasks, or refactoring tasks with the aid of Issue Tracking Systems (ITSs). Hence ITSs contain a variety of information that is only partly related to SFs. In practice, however, the usage of ITSs to store SFs comes with two major problems: (1) ITSs are neither designed nor used as documentation systems; the data inside an ITS is therefore often uncategorized, and SF descriptions are concealed in rather lengthy texts. (2) Although an SF is often requested in a single sentence, related information can be scattered among many issues. For example, implementation tasks related to an SF are often reported in additional issues. Hence, the detection of SFs in ITSs is complicated: a manual search for SFs implies reading, understanding, and exploiting the Natural Language (NL) in many issues in detail. This is cumbersome and labor-intensive, especially if related information is spread over more than one issue. This thesis investigates whether SF detection can be supported automatically. First the problem is analyzed: (i) An empirical study shows that requests for important SFs reside in ITSs, making ITSs a good target for SF detection. (ii) A second study identifies characteristics of the information and related NL in issues. These characteristics represent opportunities as well as challenges for the automatic detection of SFs. Based on these problem studies, the Issue Tracking Software Feature Detection Method (ITSoFD) is proposed. The method has two main components and includes an approach to preprocess issues.
Both components address one of the problems associated with storing SFs in ITSs. ITSoFD is validated in three solution studies: (I) An empirical study researches how NL that describes SFs can be detected with techniques from Natural Language Processing (NLP) and Machine Learning. Issues are parsed and different characteristics of the issue and its NL are extracted. These characteristics are used to classify the issue's content and identify SF description candidates, thereby approaching problem (1). (II) An empirical study researches how issues that carry information potentially related to an SF can be detected with techniques from NLP and Information Retrieval. Characteristics of the issue's NL are utilized to create a traceability network of related issues, thereby approaching problem (2). (III) An empirical study researches how NL data in issues can be preprocessed using heuristics and hierarchical clustering. Code, stack traces, and other technical information are separated from NL; heuristics identify candidates for technical information, and clustering improves the heuristics' results. This technique can be applied to support components (I) and (II).
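The preprocessing idea in study (III), separating code and stack traces from natural language with line-level heuristics, can be sketched as a small line classifier. The regular expressions below are illustrative guesses at such heuristics, not the actual ITSoFD rules, and the sample issue body is invented.

```python
import re

# Illustrative line-level heuristics (not the actual ITSoFD rules) for
# separating technical content (code, stack traces) from natural language.
TECH_PATTERNS = [
    re.compile(r"^\s+at\s+[\w.$]+\("),               # Java stack-trace frame
    re.compile(r"[{};]\s*$"),                        # code-like line ending
    re.compile(r"^\s*(def|class|import|public|private)\b"),  # declarations
]

def split_issue_body(lines):
    """Partition issue lines into natural language vs. technical content."""
    nl, tech = [], []
    for line in lines:
        (tech if any(p.search(line) for p in TECH_PATTERNS) else nl).append(line)
    return nl, tech

body = [
    "The exporter should also support CSV files.",
    "Currently it crashes:",
    "    at com.example.Exporter.write(Exporter.java:42)",
    "public void write(File f) {",
]
nl, tech = split_issue_body(body)
print(len(nl), len(tech))   # 2 natural-language lines, 2 technical lines
```

In ITSoFD such heuristic candidates are then refined by hierarchical clustering; the sketch only shows the heuristic stage that feeds the downstream SF-description classifier and traceability component.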
Traditional and newly developed testing methods were used for extensive application-related characterization of transdermal therapeutic systems (TTS) and pressure sensitive adhesives (PSAs). Large amplitude oscillatory shear tests of PSAs were correlated to the material behavior during the patient's motion and showed that all PSAs were located close to the gel point. Furthermore, an increasing strain amplitude results in stretching and yielding of the PSA's microstructure, causing first a consolidation of the network and eventually its release. The RheoTack approach was developed to allow for an advanced tack characterization of TTS with visual inspection. The results showed a clear dependence on resin content and rod geometry, and displayed the PSA's viscoelasticity, resulting in either high tack with long, stretched fibrils or non-adhesion and brittle behavior. Moreover, diffusion of water/sweat during a TTS's application might influence its performance. Therefore, a dielectric-analysis-based evaluation method was developed that captures water diffusion into the PSA, from which the diffusion coefficient can be determined; it showed a clear dependence on material and resin content. All methods allow for an advanced product-oriented material testing that can be utilized in further TTS development.
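The step from a measured uptake curve to a diffusion coefficient can be sketched with the classical Fickian plane-sheet relation, D ≈ 0.049 L²/t_½ for one-sided absorption into a film of thickness L, where t_½ is the time at which half the equilibrium uptake is reached. The thesis derives D from dielectric data; the relation and the numbers below are a textbook stand-in, not the method or values from the work.

```python
def fickian_diffusion_coefficient(thickness_m: float, t_half_s: float) -> float:
    """Estimate D (m^2/s) from the sorption half-time of a plane sheet.

    Classical Fickian result D = 0.049 * L^2 / t_1/2 for one-sided
    exposure of a film of thickness L. Illustrative only.
    """
    return 0.049 * thickness_m ** 2 / t_half_s

# Hypothetical example: a 100 µm PSA film reaching half of its
# equilibrium water uptake after 30 minutes.
D = fickian_diffusion_coefficient(100e-6, 30 * 60)
```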
Skill generalisation and experience acquisition for predicting and avoiding execution failures
(2023)
For performing tasks in their target environments, autonomous robots usually execute and combine skills. Robot skills in general and learning-based skills in particular are usually designed so that flexible skill acquisition is possible, but without an explicit consideration of execution failures, the impact that failure analysis can have on the skill learning process, or the benefits of introspection for effective coexistence with humans. Particularly in human-centered environments, the ability to understand, explain, and appropriately react to failures can affect a robot's trustworthiness and, consequently, its overall acceptability. Thus, in this dissertation, we study the questions of how parameterised skills can be designed so that execution-level decisions are associated with semantic knowledge about the execution process, and how such knowledge can be utilised for avoiding and analysing execution failures. The first major segment of this work is dedicated to developing a representation for skill parameterisation whose objective is to improve the transparency of the skill parameterisation process and enable a semantic analysis of execution failures. We particularly develop a hybrid learning-based representation for parameterising skills, called an execution model, which combines qualitative success preconditions with a function that maps parameters to predicted execution success. The second major part of this work focuses on applications of the execution model representation to address different types of execution failures. We first present a diagnosis algorithm that, given parameters that have resulted in a failure, finds a failure hypothesis by searching for violations of the qualitative model, as well as an experience correction algorithm that uses the found hypothesis to identify parameters that are likely to correct the failure. 
Furthermore, we present an extension of execution models that allows multiple qualitative execution contexts to be considered so that context-specific execution failures can be avoided. Finally, to enable the avoidance of model generalisation failures, we propose an adaptive ontology-assisted strategy for execution model generalisation between object categories that aims to combine the benefits of model-based and data-driven methods; for this, information about category similarities as encoded in an ontology is integrated with outcomes of model generalisation attempts performed by a robot. The proposed methods are exemplified in terms of various use cases (object and handle grasping, object stowing, pulling, and hand-over) and evaluated in multiple experiments performed with a physical robot. The main contributions of this work include a formalisation of the skill parameterisation problem by considering execution failures as an integral part of the skill design and learning process, a demonstration of how a hybrid representation for parameterising skills can contribute towards improving the introspective properties of robot skills, as well as an extensive evaluation of the proposed methods in various experiments. We believe that this work constitutes a small first step towards more failure-aware robots that are suitable to be used in human-centered environments.
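The execution-model idea, qualitative success preconditions combined with a function mapping parameters to predicted success, can be mimicked in a toy sketch. The predicates, the success function, and the parameter names below are all invented; they reproduce only the structure of the representation, not the thesis's actual learned models or diagnosis algorithm.

```python
from typing import Callable, Dict, List

# A qualitative precondition is a named predicate over execution parameters.
Precondition = Callable[[Dict[str, float]], bool]

class ExecutionModel:
    """Toy execution model: qualitative preconditions + success predictor."""

    def __init__(self, preconditions: Dict[str, Precondition],
                 success_fn: Callable[[Dict[str, float]], float]):
        self.preconditions = preconditions
        self.success_fn = success_fn

    def diagnose(self, params: Dict[str, float]) -> List[str]:
        """Return names of violated preconditions as a failure hypothesis."""
        return [name for name, pred in self.preconditions.items()
                if not pred(params)]

# Hypothetical grasping model: the gripper must be above the object and
# close enough laterally; predicted success decays with lateral offset.
model = ExecutionModel(
    preconditions={
        "above_object": lambda p: p["z"] > 0.0,
        "within_reach": lambda p: abs(p["x"]) < 0.1,
    },
    success_fn=lambda p: max(0.0, 1.0 - 5.0 * abs(p["x"])),
)

hypothesis = model.diagnose({"x": 0.25, "z": 0.05})  # too far to the side
```

A correction step in this spirit would search for parameters that satisfy all preconditions and maximize the predicted success, which is what the experience correction algorithm in the thesis does over learned models.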
For a sustainable development, the electricity sector needs to be decarbonized. In 2017, only 54% of West African households had access to the electrical grid. Thus, renewable sources should play a major role in the development of the power sector in West Africa. Among renewable energy sources, solar power shows the highest potential. However, it is highly variable, depending on the atmospheric conditions. This study addresses the challenges for a solar-based power system in West Africa by analyzing the atmospheric variability of solar power. For this purpose, two aspects are investigated. In the first part, the daily power reduction due to atmospheric aerosols is quantified for different solar power technologies. Meteorological data at six ground-based stations is used to model photovoltaic and parabolic trough power during all mostly clear-sky days in 2006, combining a radiative transfer model with a solar power model. The results show that the reduction due to aerosols can reach up to 79% for photovoltaic and up to 100% for parabolic trough power plants during a major dust outbreak. The frequent dust outbreaks occurring in West Africa would thus cause frequent blackouts if sufficient storage capacities are not available. On average, aerosols reduce the daily power yields by 13% to 22% for photovoltaic and by 22% to 37% for parabolic troughs. For the second part, the long-term atmospheric variability and trends of solar irradiance are analyzed and their impact on photovoltaic yields is examined for West Africa. Based on a 35-year satellite data record (1983-2017), the temporal and spatial variability and the general trend are depicted for global and direct horizontal irradiances. Furthermore, photovoltaic yields are calculated on a daily basis. They show a strong meridional gradient with highest values of 5 kWh/kWp in the Sahara and Sahel zone and lowest values in southern West Africa (around 4 kWh/kWp).
The temporal variability is highest in southern West Africa (up to around 18%) and lowest in the Sahara (around 4.5%). This implies the need for a North-South grid development to feed the increasing demand on the highly populated coast with solar power from the northern parts of West Africa. Additionally, global irradiances show a long-term positive trend (up to +5 W/m²/decade) in the Sahara and a negative trend (up to -5 W/m²/decade) in southern West Africa. If this trend continues, the spatial differences in solar power potential will increase in the future. This thesis provides a better understanding of the impact of atmospheric variability on solar power in a challenging environment like West Africa, which is characterized by the strong influence of the African monsoon. In particular, the importance of aerosols is pointed out. Furthermore, long-term changes of irradiance are characterized with regard to their implications for photovoltaic power.
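The reported daily reduction figures are relative yield losses. The sketch below shows only that bookkeeping with a deliberately simplified specific-yield approximation (yield = performance ratio × irradiation / 1000 W/m²) and invented irradiance profiles; the thesis obtains clear-sky and aerosol-laden power from a radiative transfer model coupled to detailed PV and parabolic-trough models.

```python
def daily_pv_yield(hourly_ghi_w_m2, performance_ratio=0.8):
    """Daily specific PV yield in kWh/kWp from hourly global irradiance.

    Simplified specific-yield approximation: PR * sum(GHI) / 1000 W/m^2.
    """
    return performance_ratio * sum(hourly_ghi_w_m2) / 1000.0

# Invented irradiance profiles for a clear and a dust-laden day (W/m^2).
clear = [0, 100, 400, 700, 900, 950, 900, 700, 400, 100, 0]
dusty = [0, 60, 240, 420, 540, 570, 540, 420, 240, 60, 0]

# Relative daily reduction due to aerosols, in percent.
reduction_pct = 100.0 * (1.0 - daily_pv_yield(dusty) / daily_pv_yield(clear))
```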
With the digital transformation, software systems have become an integral part of our society and economy. In every part of our lives, software systems are increasingly utilized to, e.g., simplify housework or optimize business processes. All these applications are connected to the Internet, which already comprises millions of software services consumed by billions of people. Applications that handle such a magnitude of users and data traffic need to be highly scalable and are therefore denoted as Ultra Large Scale (ULS) systems. Roy Fielding defined one of the first approaches for designing modern ULS software systems. In his doctoral thesis, Fielding introduced the architectural style Representational State Transfer (REST), which forms the theoretical foundation of the web. At present, the web is considered the world's largest ULS system. Due to the large number of users and the significance of software for society and the economy, the security of ULS systems is another crucial quality factor besides high scalability.
Since its advent, the sustainability effects of the modern sharing economy have been the subject of controversial debate. While its potential was initially discussed in terms of a post-ownership development, with a view to decentralizing value creation, increasing social capital, and relieving the environment through better utilization of material goods, critics have become increasingly loud in recent years. Many people hoped that carsharing could lead to a development away from ownership towards flexible use and thus more resource-efficient mobility. However, carsharing remains a niche, and while many people like the idea in general, they appear to consider carsharing not to be advantageous as a means of transport in terms of cost, flexibility, and comfort. A key innovation that could elevate carsharing from its niche existence in the future is autonomous driving. This technology could give shared mobility a new boost by allowing it to overcome the weaknesses of the present carsharing business model. Flexibility and comfort could be greatly enhanced with shared autonomous vehicles (SAVs), which could simultaneously offer benefits in terms of low cost and better use of time without the burden of vehicle ownership. However, it is not the technology itself that is sustainable; rather, sustainability depends on the way in which this technology is used. Hence, it is necessary to prospectively assess the direct and indirect (un)sustainable effects before or during the development of a technology in order to incorporate these findings into the design and decision-making process. Transport research has been intensively analyzing the possible economic, social, and ecological consequences of autonomous driving for several years. However, research lacks knowledge about the consequences to be expected from shared autonomous vehicles. Moreover, previous findings are mostly based on the knowledge of experts, while potential users are rarely included in the research.
To address this gap, this thesis contributes to answering the question of what the ecological and social impacts of the expected concept of SAVs will be. In my thesis, I study in particular the ecological consequences of SAVs in terms of the potential modal shifts they can induce, as well as their social consequences in terms of potential job losses in the taxi industry. To this end, I apply a user-oriented, mixed-method technology assessment approach that complements existing, expert-oriented technology assessment studies on autonomous driving, which have so far been dominated by scenario analyses and simulations. To answer the two questions, I triangulated the method of scenario analysis with qualitative and quantitative user studies. The empirical studies provide evidence that the automation of mobility services such as carsharing may, to a small extent, foster a shift from the private vehicle towards mobility on demand. However, the findings also indicate that rebound effects are to be expected: significantly more users are expected to move away from the more sustainable public transportation, so that the negative modal shift effects overcompensate the positive ones. The results show that a large proportion of taxi trips could be replaced by SAVs, making the profession of taxi driver somewhat obsolete. However, interviews with taxi drivers revealed that the services provided by the drivers go beyond mere transport, so that even in the age of SAVs the need for human assistance will continue, though to a smaller extent. Given these findings, I see potential for action at different levels: users, mobility service providers, and policymakers. Regarding the environmental and social impacts resulting from the use of SAVs, there is a strong conflict of objectives among users, potential SAV operators, and sustainable environmental and social policies.
In order to strengthen the positive effects and counteract negative ones such as unintended modal shifts, policies may soon have to regulate the design of SAVs and their introduction. A key starting point for transport policy is to promote the use of more environmentally friendly means of transport, in particular by making public transportation attractive and, if necessary, by making the use of individual motorized mobility less attractive. The taxi industry must face the challenges of automation by opening up to these developments and focusing on service orientation in order to strengthen the drivers' main unique selling point compared to automated technology. Assessing the impacts of technologies that do not yet exist generally involves great uncertainty. With the results of my work, however, I argue that a user-oriented technology assessment can usefully complement the findings of classic methods of technology assessment and can iteratively inform the development process regarding technology and regulation.
Evaluation and Optimization of IEEE802.11 multi-hop Backhaul Networks with Directional Antennas
(2020)
A major problem for rural areas is the lack of access to affordable broadband Internet connections. In these areas, distances are large, and digging a cable into the ground is extremely expensive, considering the small number of potential customers at the end of that cable. This leads to a digital divide, where urban areas enjoy a high-quality service at low cost, while rural areas suffer from the reverse.
This work is dedicated to an alternative technical approach aiming to reduce the cost for Internet Service Providers in rural areas: WiFi-based Long Distance networks. A set of significant contributions to technology-related aspects of WiFi-based Long Distance networks is described in three fields: propagation on long-distance Wi-Fi links, MAC-layer scheduling, and interference modeling and channel assignment with directional antennas.
For each field, the author compiles and discusses the state of the art. Afterwards, the author derives research questions and tackles several open issues to develop these networks further towards a suitable technology for the backhaul segment.
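For the propagation field, the first-order feasibility check of a long-distance Wi-Fi link is a free-space link budget. The sketch below uses the standard free-space path loss formula (distance in km, frequency in MHz, constant 32.44); the transmit power, antenna gains, and sensitivity threshold are example values for illustration, not figures from the thesis.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Received power for a line-of-sight link with directional antennas."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# Hypothetical hardware: 10 km link at 5180 MHz, 20 dBm transmit power,
# 24 dBi dish antennas on both ends.
prx = rx_power_dbm(20, 24, 24, distance_km=10, freq_mhz=5180)
link_ok = prx > -70  # above a typical 802.11 receiver sensitivity
```

Real long-distance links additionally need Fresnel-zone clearance and suffer losses beyond free space, which is exactly where measurement-based propagation work such as this thesis comes in.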
Process-dependent thermo-mechanical viscoelastic properties and the corresponding morphology of HDPE extrusion blow molded (EBM) parts were investigated. Evaluation of bulk data showed that flow direction, draw ratio, and mold temperature influence the viscoelastic behavior significantly in certain temperature ranges. Flow induced orientations due to higher draw ratio and higher mold temperature lead to higher crystallinities. To determine the local viscoelastic properties, a new microindentation system was developed by merging indentation with dynamic mechanical analysis. The local process-structure-property relationship of EBM parts showed that the cross-sectional temperature distribution is clearly reflected by local crystallinities and local complex moduli. Additionally, a model to calculate three-dimensional anisotropic coefficients of thermal expansion as a function of the process dependent crystallinity was developed based on an elementary volume unit cell with stacked layers of amorphous phase and crystalline lamellae. Good agreement of the predicted thermal expansion coefficients with measured ones was found up to a temperature of 70 °C.
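The stacked amorphous/crystalline unit cell resembles a classical laminate, so the crystallinity dependence of thermal expansion can be sketched with simple rule-of-mixtures bounds: a volume-weighted average through the stack (series) and a stiffness-weighted average in-plane (iso-strain). This is a hedged simplification, not the author's full three-dimensional model, and the material constants are invented HDPE-like values.

```python
def cte_series(v_c, alpha_a, alpha_c):
    """Through-thickness CTE of stacked layers: volume-weighted average."""
    return (1 - v_c) * alpha_a + v_c * alpha_c

def cte_parallel(v_c, alpha_a, alpha_c, E_a, E_c):
    """In-plane CTE: stiffness-weighted average (iso-strain assumption)."""
    num = (1 - v_c) * E_a * alpha_a + v_c * E_c * alpha_c
    return num / ((1 - v_c) * E_a + v_c * E_c)

# Hypothetical constants: compliant amorphous phase with high expansion,
# stiff crystalline lamellae with low expansion.
v_c = 0.6                            # crystallinity from process conditions
alpha_a, alpha_c = 2.0e-4, 0.7e-4    # 1/K
E_a, E_c = 0.2e9, 5.0e9              # Pa

a_perp = cte_series(v_c, alpha_a, alpha_c)
a_par = cte_parallel(v_c, alpha_a, alpha_c, E_a, E_c)
```

The stiff crystalline lamellae dominate the in-plane response, so the in-plane CTE lies well below the through-thickness value, which is the anisotropy the unit-cell model captures.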
Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head Mounted Displays and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user's gaze, measured with an eye tracker integrated into the HMD.
In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing global illumination of a scene interactively by allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
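The gaze-adaptive sample density at the heart of foveated ray tracing can be sketched as a falloff function of eccentricity: full density inside the fovea, decreasing density towards the periphery. The hyperbolic falloff and its constants below are illustrative choices, not the perception model or parameters used in the thesis.

```python
def sample_density(ecc_deg: float, fovea_deg: float = 5.0,
                   floor: float = 0.05) -> float:
    """Relative sample density as a function of gaze eccentricity (degrees).

    Full density inside the fovea, hyperbolic falloff outside, clamped to
    a peripheral floor so every image region still receives some samples.
    """
    if ecc_deg <= fovea_deg:
        return 1.0
    return max(floor, fovea_deg / ecc_deg)

# Density at the gaze point, fovea edge, mid-periphery, far periphery.
densities = [sample_density(e) for e in (0.0, 5.0, 20.0, 90.0)]
```

Since most pixels then receive far fewer than one sample per frame, a reprojection and temporal accumulation stage, as described above, is what makes such sparse sampling visually tolerable.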
Lignin is an aromatic biopolymer found in the cell walls of plants. It is mainly built from three so-called monolignols (p-hydroxyphenyl (H), guaiacol (G), and syringol (S)), which can be linked to each other via various bond types, and it contains a large variety of functional groups. Of particular interest for the utilization of lignin are its many phenolic hydroxyl groups, which can serve as starting points for the synthesis of new products and are also responsible for its antioxidant properties. Since structure and properties depend on many factors, such as the biomass and the pulping process, a detailed characterization of the lignins is necessary to elucidate structure-property relationships and thus come a step closer to a possible material utilization. This work investigates the influence of the biomass, including the particle size used, and of the organosolv pulping process on the monomer composition, the molecular weight, and the antioxidant capacity of the isolated lignins.
The raw materials for lignin production are the three perennial, lignocellulose-rich low-input plants Miscanthus x giganteus, Silphium perfoliatum, and Paulownia tomentosa, which are currently used mainly for energy generation. Within the bioeconomy strategy of the European Union, however, future biorefineries are to focus on the holistic use of biomass and thus also on material utilization. In addition to these three plants, organosolv lignins are also isolated from wheat straw and beech wood, two biomasses already well described in the literature, and two softwood kraft lignins are used for comparison. The results show that the type of biomass mainly influences the monomer composition: grasses consist of all three monolignols, hardwoods mostly of S and G units, while softwoods are built from G units only. The wood lignins also exhibit higher molecular weights and better antioxidant properties than the grass and herb lignins. Finer milling of the biomass can influence the monomer composition: using smaller particle sizes leads to lignins with a higher content of H units, both for Miscanthus and for Paulownia. Moreover, for Paulownia, the yield can be increased and an increase in molecular weight observed when the smallest sieve fraction is used for the organosolv process. The autohydrolysis and the organosolv pulping process itself have a greater influence than the degree of milling. While the monomer composition hardly changes for the same biomass, the bond types between the monolignols do. With increasing process severity (time, temperature, ethanol concentration), ether bonds are cleaved, which increases the proportion of phenolic hydroxyl groups and thus the antioxidant capacity.
In addition to this depolymerization, recondensation reactions are also partially observed.
The results obtained contribute to an understanding of the relationship between lignin source and extraction on the one hand and the resulting lignin structure and antioxidant capacity on the other, and thus provide a basis for the transition from energetic use towards a sustainable material utilization of this renewable biopolymer. In particular, the choice of pulping parameters allows structure and antioxidant capacity to be influenced in a targeted manner, which should be a focus of future studies.