H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (68)
- Fachbereich Wirtschaftswissenschaften (58)
- Fachbereich Ingenieurwissenschaften und Kommunikation (43)
- Fachbereich Sozialpolitik und Soziale Sicherung (38)
- Fachbereich Angewandte Naturwissenschaften (36)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (34)
- Graduierteninstitut (15)
- Institut für Verbraucherinformatik (IVI) (14)
- Institute of Visual Computing (IVC) (14)
- Institut für funktionale Gen-Analytik (IFGA) (7)
Document Type
- Article (78)
- Conference Object (61)
- Part of a Book (40)
- Book (monograph, edited volume) (22)
- Doctoral Thesis (15)
- Preprint (15)
- Contribution to a Periodical (6)
- Report (5)
- Research Data (4)
- Master's Thesis (2)
- Working Paper (2)
- Bachelor Thesis (1)
- Video (1)
Year of publication
- 2020 (252)
Has Fulltext
- no (252)
Keywords
- Digitalisierung (3)
- Lehrbuch (3)
- Quality diversity (3)
- post-buckling (3)
- ARIMA (2)
- Artificial Intelligence (2)
- Autoencoder (2)
- Automatic Short Answer Grading (2)
- Bayesian optimization (2)
- Computational fluid dynamics (2)
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
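As a hedged illustration of the learning scheme sketched in this abstract, the following minimal Python example shows a contextual upper-level policy of the kind used in (C-)REPS: the hand-over position is drawn from a Gaussian whose mean is linear in the context, and the policy is refitted by weighted maximum likelihood. The names, the simplified exponential re-weighting, and the omission of the model-based components are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Contextual upper-level policy: hand-over position ~ N(W @ [context, 1], Sigma).
# The exponential re-weighting below is a simplification; C-REPS obtains the
# weights from a KL-constrained dual problem.

def sample_handover(W, Sigma, context, rng):
    mean = W @ np.append(context, 1.0)          # mean linear in the context, with bias
    return rng.multivariate_normal(mean, Sigma)

def reps_style_update(W, Sigma, contexts, samples, returns, temperature=1.0):
    """Weighted maximum-likelihood refit of the linear-Gaussian policy."""
    weights = np.exp((returns - returns.max()) / temperature)
    Phi = np.hstack([contexts, np.ones((len(contexts), 1))])
    D = np.diag(weights)
    W_new = np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ samples).T
    residual = samples - Phi @ W_new.T
    Sigma_new = (residual.T * weights) @ residual / weights.sum()
    return W_new, Sigma_new

rng = np.random.default_rng(0)
W = np.zeros((3, 3))                             # 3-D hand-over position, 2-D context + bias
Sigma = 0.05 * np.eye(3)
contexts = rng.uniform(0, 1, (50, 2))            # e.g. posture features of the receiver
samples = np.array([sample_handover(W, Sigma, c, rng) for c in contexts])
returns = -np.linalg.norm(samples - np.hstack([contexts, np.ones((50, 1))]), axis=1)  # toy reward
W, Sigma = reps_style_update(W, Sigma, contexts, samples, returns)
```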
The use of fossil fuel resources has caused a growing number of environmental problems. Recent research therefore focuses on environmentally friendly materials from sustainable feedstocks for future fuels, chemicals, fibers and polymers. Lignocellulosic biomass has become the raw material of choice for these new materials, and research has recently focused on using lignin as a substitute material in many industrial applications. The antiradical and antimicrobial activities of lignin and lignin-based films are both of great interest for applications such as food packaging additives. The DPPH assay was used to determine the antioxidant activity of Kraft lignin compared to Organosolv lignins from different biomasses. The purification procedure of Kraft lignin showed that double-fold selective extraction is the most efficient, as confirmed by UV-Vis, FTIR, HSQC, 31P NMR, SEC, and XRD. The antioxidant capacity was discussed with regard to the biomass source, pulping process, and degree of purification. Lignin obtained from industrial black liquor is compared with beech wood samples: the biomass source influences the DPPH inhibition (softwood > grass) and the TPC (softwood < grass). DPPH inhibition is affected by the polarity of the extraction solvent, following the trend ethanol > diethyl ether > acetone. Reduced polydispersity has a positive influence on the DPPH inhibition. Storage decreased the DPPH inhibition but increased the TPC values. The DPPH assay was also used to discuss the antiradical activity of HPMC/lignin and HPMC/lignin/chitosan films. In both binary (HPMC/lignin) and ternary (HPMC/lignin/chitosan) systems, the 5% addition showed the highest activity and the highest addition the lowest. Both scavenging activity and antimicrobial activity depend on the biomass source: Organosolv of softwood > Kraft of softwood > Organosolv of grass. Lignins and lignin-containing films showed high antimicrobial activities against Gram-positive and Gram-negative bacteria at 35 °C and at low temperatures (0-7 °C). Purification of Kraft lignin has a negative effect on the antimicrobial activity, while storage has a positive effect. Lignin leaching in the produced films affected the activity positively, and the chitosan addition enhances the activity against both Gram-positive and Gram-negative bacteria. Testing the films against food spoilage bacteria that grow at low temperatures revealed activity of the 30% addition in the HPMC/L1 film against both B. thermosphacta and P. fluorescens, while L5 was active only against B. thermosphacta. In the HPMC/lignin/chitosan films, the 5% addition exhibited activity against both food spoilage bacteria.
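For reference, the radical-scavenging activity reported by a DPPH assay is conventionally expressed as a percentage inhibition of absorbance; this standard relation (not stated explicitly in the abstract) reads:

\[
\text{Inhibition}\ (\%) = \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100
\]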
In 1991, researchers at the Center for the Learning Sciences at Carnegie Mellon University were confronted with the confusing question "where is the AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness and of a mutual understanding of AI-enabled systems between designers and users, users' informal theories about how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present our preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users' folk theories.
In this paper we introduce the Perception for Autonomous Systems (PAZ) software library. PAZ is a hierarchical perception library that allows users to manipulate multiple levels of abstraction in accordance with their requirements or skill level. More specifically, PAZ is divided into three hierarchical levels, which we refer to as pipelines, processors, and backends. These abstractions allow users to compose functions in a hierarchical modular scheme that can be applied to preprocessing, data augmentation, prediction, and postprocessing of inputs and outputs of machine learning (ML) models. PAZ uses these abstractions to build reusable training and prediction pipelines for multiple robot perception tasks such as 2D keypoint estimation, 2D object detection, 3D keypoint discovery, 6D pose estimation, emotion classification, face recognition, instance segmentation, and attention mechanisms.
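The hierarchical pipeline/processor idea described above can be pictured with a minimal sketch; the class and method names below are hypothetical and do not reproduce PAZ's actual API.

```python
# Hypothetical sketch of the pipeline/processor abstraction; not PAZ's real API.
class Processor:
    """Smallest unit of computation; backends would do the actual work."""
    def call(self, x):
        raise NotImplementedError

class Resize(Processor):
    def __init__(self, shape):
        self.shape = shape
    def call(self, image):
        return image  # placeholder: a backend such as OpenCV would resize here

class Sequential(Processor):
    """A pipeline is itself a processor, so pipelines compose hierarchically."""
    def __init__(self, processors):
        self.processors = processors
    def call(self, x):
        for processor in self.processors:
            x = processor.call(x)
        return x

preprocess = Sequential([Resize((300, 300))])   # could be nested inside a detection pipeline
```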
A general method of topological reduction for network problems is presented, using gas transport networks as an example. The method is based on the contraction of series, parallel, and tree-like subgraphs for element equations of quadratic, power-law, and general monotone dependencies. It significantly reduces the complexity of the graph and accelerates the solution procedure for stationary network problems, and it has been tested on a large set of realistic network scenarios. Possible extensions of the method are described, including triangulated element equations, continuation of the equations at infinity to provide uniqueness of the solution, and the choice of a Newtonian stabilizer for nearly degenerate systems. The method is applicable to various sectors of the energy field, including gas networks, water networks, and electric networks, as well as to the coupling of different sectors.
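A minimal sketch of the series/parallel contraction rules, assuming a purely quadratic element law dp = r·q·|q| (the general monotone case treated in the paper is not covered here):

```python
import math

def series_resistance(resistances):
    # same flow through all elements, so the quadratic pressure drops add
    return sum(resistances)

def parallel_resistance(resistances):
    # same pressure drop dp across all branches, flows q_i = sqrt(dp / r_i) add
    return sum(1.0 / math.sqrt(r) for r in resistances) ** -2

# contracting a small series/parallel motif into one equivalent element
r_eq = series_resistance([0.4, parallel_resistance([1.2, 0.8])])
print(r_eq)
```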
This dataset contains data from two measurement campaigns in autumn 2018 and summer 2019 that were part of the BMWi project "MetPVNet", and serve as a supplement to the paper "Dynamic model of photovoltaic module temperature as a function of atmospheric conditions", published in the special edition of "Advances in Science and Research", the proceedings of the 19th EMS Annual Meeting: European Conference for Applied Meteorology and Climatology 2019.
Data are resampled to one minute, and include:
- PV module temperature
- Ambient temperature
- Plane-of-array irradiance
- Wind speed
- Atmospheric thermal emission
The data were used for the dynamic temperature model, as presented in the paper.
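A minimal sketch of working with such one-minute data in pandas; the file name and column names below are assumptions, not the dataset's actual schema.

```python
import pandas as pd

# File and column names are assumed, not the dataset's actual schema.
df = pd.read_csv("metpvnet_campaign_2019.csv", parse_dates=["time"], index_col="time")
df = df.resample("1min").mean()                        # data are provided at one-minute resolution
daily_poa = df["poa_irradiance"].resample("1D").mean() # e.g. daily mean plane-of-array irradiance
```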
Abschlussbericht zum BMBF-Fördervorhaben Enabling Infrastructure for HPC-Applications (EI-HPC)
(2020)
Trust is the lubricant of the sharing economy. This is especially true in peer-to-peer carsharing, where one leaves a highly valuable good with a stranger in the hope of getting it back unscathed. Nowadays, ratings by other users are the major mechanism for establishing trust. To foster the uptake of peer-to-peer carsharing, connected car technology opens new possibilities to support trust-building, e.g., by adding driving behavior statistics to users' profiles. However, collecting such data intrudes into rentees' privacy. To explore the tension between the need for trust and privacy demands, we conducted three focus groups and eight individual interviews. Our results show that connected car technologies can increase trust for car owners and rentees not only before but also during and after rentals. The design of such systems must allow information to be differentiated in terms of its type, the context, and the negotiability of its disclosure.
A closer look at today's sharing platforms such as AirBnB, Uber, Drivy or Fairleihen reveals that they have one thing in common: as platform economies, they are based on at least two user groups, providers and consumers of goods or services. A frequent problem of such two- or multi-sided markets, however, is that the value created by the users is not distributed evenly between the platform and its active users, but usually flows exclusively to the platforms as profit. Blockchain technology could solve this problem by organizing the transfer of information and value securely and in a decentralized way, rendering many functions of traditional intermediaries obsolete. This paper provides an overview of the application areas and the basic concept of the sharing economy. We show how business models and infrastructures can be mapped onto a blockchain, which potentials a blockchain-based infrastructure offers, when it can be useful in the sharing economy, and which problems it can solve.
Coumarin as a structural component of substrates and probes for serine and cysteine proteases
(2020)
In this research project, a practice-oriented method was developed that makes it possible to prepare soil samples after they have been taken in the field and to analyze them for their microplastic content. The extraction method has already been validated for two polymers, PA 12 and PE (mulch-film particles), with recovery rates of 100% each for particles larger than 0.5 mm. For particles larger than 63 μm, the recovery rate is 97% for PE mulch-film particles and 86% for PA particles. Furthermore, various spectroscopic detection methods were examined and compared with regard to their potentials and limitations. It was found that digital microscopy is very well suited to determining the color, size, shape and number of particles, but depends heavily on subjective judgement and should therefore always be combined with a further detection method. In this work, ATR-FTIR spectroscopy was used for this purpose; it additionally allows the polymer type of individual particles to be determined, with a lower detection limit of 500 μm. The method was applied to a total of five agricultural fields, two of which are farmed conventionally and three organically. To obtain a first impression of the current microplastic load of agricultural soils, the results obtained with the method developed in this work were extrapolated and reported as emission coefficients in various units.
The present thesis elucidates the development of (i) a series of small-molecule inhibitors reacting in a covalent-irreversible manner with the targeted proteases and (ii) a fluorescently labeled activity-based probe as a pharmacological tool compound for investigating specific functions of the mentioned enzymes in vitro. The rational design, organic synthesis and quantitative structure-activity relationships are described extensively.
Validierung einer Web-Applikation zum Fern-Monitoring von Belastungs- und Erholungsparametern
(2020)
In parallel with the agile development of a web application that records parameters for managing training load and strain, the implemented load and recovery parameters were tested in practice with volunteer testers. To assess the external validity of both the application and the partly self-developed metrics, these are evaluated by regression analysis.
OSC data
(2020)
Digital Business
(2020)
Digital Business covers the specific characteristics of digital business models and the handling of data, and explains how digital markets work and how they affect service functions such as HR, communication, finance and marketing. Key success factors such as agile management and customer experience are also addressed. A total of 30 experts contributed their specific know-how to this practice-oriented Litello eBook, which is also well suited as a basis for relevant courses.
Alkaline methanol oxidation is an important electrochemical process in the design of efficient fuel cells. Typically, a system of ordinary differential equations is used to model the kinetics of this process, and the parameters of the underlying mathematical model are fitted on the basis of different types of experiments characterizing the fuel cell. In this paper, we describe generic methods for the creation of a mathematical model of electrochemical kinetics from a given reaction network, as well as for the identification of the parameters of this model. We also describe methods for model reduction based on a combination of steady-state and dynamical descriptions of the process. The methods are tested on a range of experiments, including different concentrations of the reagents and different voltage ranges.
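A hedged sketch of the generic workflow (ODE kinetics plus parameter identification), shown for a toy reaction network A -> B -> C rather than the actual methanol-oxidation mechanism; all data and rate constants below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    # toy network A -> B -> C with rate constants k1, k2
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

t_meas = np.linspace(0.0, 10.0, 20)
true = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t_meas, args=(0.8, 0.3))
y_meas = true.y.T + np.random.default_rng(0).normal(0.0, 0.01, (20, 3))  # synthetic "experiment"

def residuals(k):
    sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t_meas, args=tuple(k))
    return (sol.y.T - y_meas).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=(0.0, np.inf))
print(fit.x)  # recovered rate constants, close to (0.8, 0.3)
```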
Digitale Güter
(2020)
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied owing to its low cost, good volumetric hydrogen density, and easy tailoring of hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale applications of solid-state hydrogen storage. On the other hand, activation and kinetic issues should be amended and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by the variation of Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by the substitution of Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2 or β-Ti-type and Ti4Fe2O-type at the Ti-poor and Ti-rich sides of the TiFe phase domain, respectively. Thermodynamic data agree with the available literature but offer a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities as long as only a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter by possibly substituting Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of hydrogen storage properties on composition is discussed, together with some observed correlations.
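For orientation, plateau pressures from such pressure-composition-temperature studies are commonly summarized with the van't Hoff relation ln(p/p0) = ΔH/(RT) - ΔS/R; the sketch below uses placeholder ΔH and ΔS values, not results from this work.

```python
import numpy as np

R = 8.314  # J/(mol K)

def plateau_pressure(T, dH=-28e3, dS=-105.0, p0=1e5):
    """van't Hoff estimate of the equilibrium plateau pressure (Pa) at temperature T (K)."""
    return p0 * np.exp(dH / (R * T) - dS / R)

print(plateau_pressure(298.15) / 1e5, "bar")  # placeholder dH, dS give a few bar at room temperature
```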
The book bridges the gap between business and organizational methods and their digital implementation, since process management increasingly means designing business tasks. In addition to methodological foundations, it offers many practical examples and exercises. Prof. Gadatsch's book is now regarded as the "current classic", the definitive standard reference on the IT-supported design of business processes.
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading
(2020)
Automatic Short Answer Grading (ASAG) is the process of grading student answers by computational approaches, given a question and the desired answer. Previous works implemented methods of concept mapping and facet mapping, and some used conventional word embeddings for extracting semantic features; they extracted multiple features manually to train on the corresponding datasets. We use pretrained embeddings of the transfer learning models ELMo, BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a single feature, cosine similarity, extracted from the embeddings of these models. We compare the RMSE scores and correlation measurements of the four models with previous works on the Mohler dataset. Our work demonstrates that ELMo outperformed the other three models. We also briefly describe the four transfer learning models and conclude with possible causes of the poor results of transfer learning models.
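A minimal sketch of the single-feature approach described above: one cosine-similarity value between the embeddings of the reference answer and the student answer is regressed onto the human grade. The embed function stands in for any pretrained encoder (ELMo, BERT, GPT, GPT-2) and is assumed, not defined here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def grade_features(reference_answers, student_answers, embed):
    # one feature per answer pair: cosine similarity of sentence embeddings
    return np.array([[cosine(embed(r), embed(s))]
                     for r, s in zip(reference_answers, student_answers)])

# X_train = grade_features(refs_train, answers_train, embed)      # embed: assumed encoder
# model = Ridge().fit(X_train, grades_train)
# rmse = np.sqrt(mean_squared_error(grades_test, model.predict(X_test)))
```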
Optimization plays an essential role in industrial design, but is not limited to the minimization of a simple function such as cost or strength. These tools are also used in conceptual phases, to better understand what is possible. To support this exploration we focus on Quality Diversity (QD) algorithms, which produce sets of varied, high-performing solutions. These techniques often require the evaluation of millions of solutions, making them impractical in design cases. In this thesis we propose methods to radically improve the data-efficiency of QD with machine learning, enabling its application to design. In our first contribution, we develop a method of modeling the performance of evolved neural networks used for control and design. The structures of these networks grow and change, making them difficult to model, but with a new method we are able to estimate their performance based on their heredity, improving data-efficiency by several times. In our second contribution we combine model-based optimization with MAP-Elites, a QD algorithm. A model of performance is created from known designs, and MAP-Elites creates a new set of designs using this approximation. A subset of these designs is then evaluated to improve the model, and the process repeats. We show that this approach improves the efficiency of MAP-Elites by orders of magnitude. Our third contribution integrates generative models into MAP-Elites to learn domain-specific encodings. A variational autoencoder is trained on the solutions produced by MAP-Elites, capturing the common “recipe” for high performance. This learned encoding can then be reused by other algorithms for rapid optimization, including MAP-Elites. Throughout this thesis, though the focus of our vision is design, we examine applications in other fields, such as robotics. These advances are not exclusive to design, but serve as foundational work on the integration of QD and machine learning.
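A minimal MAP-Elites sketch for the quality-diversity setting referred to throughout this thesis abstract; the grid resolution, bounds, and variation operator are illustrative assumptions, and the surrogate and generative-model extensions are omitted.

```python
import numpy as np

def map_elites(evaluate, dim, bins=20, iters=10_000, rng=np.random.default_rng(0)):
    """evaluate(x) must return (fitness, (b1, b2)) with descriptors in [0, 1]."""
    fitness = np.full((bins, bins), -np.inf)
    archive = np.zeros((bins, bins, dim))
    for _ in range(iters):
        filled = np.argwhere(np.isfinite(fitness))
        if len(filled) == 0:
            x = rng.uniform(-1, 1, dim)                            # random bootstrap
        else:
            parent = archive[tuple(filled[rng.integers(len(filled))])]
            x = np.clip(parent + rng.normal(0, 0.1, dim), -1, 1)   # Gaussian variation
        f, (b1, b2) = evaluate(x)
        i, j = min(int(b1 * bins), bins - 1), min(int(b2 * bins), bins - 1)
        if f > fitness[i, j]:                                      # keep the elite of each niche
            fitness[i, j], archive[i, j] = f, x
    return fitness, archive
```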
The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge: between exploring a wide variety of solutions and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions, but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a thousand-joint planar arm. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel, but similar, tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but show that black-box optimization encodings can be automatically learned rather than hand designed.
The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing between exploiting the DDE to generalize the knowledge contained in the current archive of elites and exploring new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems, and provides a low-dimensional representation which can then be re-used. We evaluate the DDE approach by evolving solutions for the inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
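A hedged sketch of the mixed variation step described above: with some probability a child is produced by perturbing a parent in the learned latent space and decoding it, otherwise by ordinary direct mutation. The encode/decode callables stand in for the trained VAE; the names and probabilities are assumptions.

```python
import numpy as np

def dde_variation(parent, encode, decode, rng, p_dde=0.5, sigma_z=0.2, sigma_x=0.1):
    if rng.random() < p_dde:
        z = encode(parent)
        child = decode(z + rng.normal(0.0, sigma_z, size=z.shape))    # child expressed through the DDE
    else:
        child = parent + rng.normal(0.0, sigma_x, size=parent.shape)  # traditional direct mutation
    return child
```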
Communication is rightly considered the supreme discipline in workplace health management (BGM). The first step is to raise employees' awareness of health and to inform them with relevant materials, in order ultimately to motivate them to take part in health offerings. These three steps are also recommended for communication in the digital age, and health platforms and/or health apps can support this communication. Finding the right amount of communication is a further challenge in digital times, as information can easily get lost in the flood of e-mails. A combination of push and pull communication has proven effective for sparking employees' interest in health, so that they then choose independently from existing offerings (information, courses, etc.).
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are and whether they can be consolidated into a model or framework that can align previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined but is refined enough to do justice to the complexities of human behaviour. It also raises some issues of how subjective well-being is, and should be, measured. Further avenues of research and ways to continue building this model and framework are proposed.
In optimization methods that return diverse solution sets, three interpretations of diversity can be distinguished: multi-objective optimization, which searches for diversity in objective space; multimodal optimization, which tries to spread the solutions out in genetic space; and quality diversity, which performs diversity maintenance in phenotypic space. We introduce niching methods that provide more flexibility in the analysis of diversity, and a simple domain in which to compare the paradigms and gain insights about them. We show that multi-objective optimization does not always produce much diversity, that quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions, and that multimodal optimization produces higher-fitness solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set. Finally, we make recommendations about when to use which approach.
In complex, expensive optimization domains we often focus narrowly on finding high-performing solutions, instead of expanding our understanding of the domain itself. But what if we could quickly understand the complex behaviors that can emerge in such domains instead? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm that allows the discovery of a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve the feature models. Our method reduces the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering a diversity of behaviors helps engineers to better understand expensive domains and their solutions.
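A small sketch of the surrogate idea, assuming a Gaussian-process model that predicts an expensive phenotypic feature from design parameters so that most niche assignments need no fluid simulation; the data below are dummies, not outputs of the CFD solver used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
X_evaluated = rng.uniform(0, 1, (30, 4))                       # shapes already simulated (dummy)
flow_feature = np.sin(X_evaluated[:, 0]) + X_evaluated[:, 1]   # placeholder for a solver output

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_evaluated, flow_feature)
mean, std = surrogate.predict(rng.uniform(0, 1, (5, 4)), return_std=True)  # cheap feature estimates
```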
Collaborative industrial robots are becoming increasingly cost-efficient for manufacturing companies. While these systems can be a great help to human workers, they also pose a serious health risk if the strictly required safety measures are implemented inadequately. Conventional safety devices such as fences or light curtains offer good protection, but such static safeguards are problematic in new, highly dynamic work scenarios.
In the BeyondSPAI research project, a functional prototype of a multi-sensor system for safeguarding such dynamic work scenarios was designed, implemented and field-tested. The core of the system is a robust optical material classification that uses an intelligent InGaAs camera system to distinguish skin from other typical workpiece surfaces (e.g. wood, metals or plastics). This unique capability is used to reliably detect human co-workers, so that a conventional robot can subsequently operate as a person-aware cobot.
The system is modular and can easily be extended with further sensors of various kinds. It can be adapted to different brands of industrial robots and quickly integrated into existing robot systems. Depending on which monitoring zone is breached, the four safety outputs provided by the system can be used either to issue a warning, to slow the robot's motion down to a safe speed, or to bring the robot to a safe stop. As soon as all zones are again identified as "clearly free of persons", the robot can accelerate again, resume its original motion and continue working.
Demand forecast
(2020)
This volume of the series Springer Briefs in Space Life Sciences explains the physics and biology of radiation in space, defines various forms of cosmic radiation and their dosimetry, and presents a range of exposure scenarios. It also discusses the effects of radiation on human health and describes the molecular mechanisms of heavy charged particles’ deleterious effects in the body. Lastly, it discusses countermeasures and addresses the vital question: Are we ready for launch?
Written for researchers in the space life sciences and space biomedicine, and for master’s students in biology, physics, and medicine, the book will also benefit all non-experts endeavoring to understand and enter space.
In this work, resorcinol-formaldehyde aerogels were developed as wick material for use in loop heat pipes (LHP). Owing to their high porosity and effective capillary action, aerogels as wick material provide a good basis for mass and heat transport, and these properties can contribute to improving the cooling performance of a heat pump. For this purpose, aerogels were synthesized in wick form, and their skeletal density, envelope density, porosity and gas permeability were subsequently determined. In addition, a test for swelling behaviour was developed, and the samples were sent to the company Allatherm to check whether the developed RF aerogels in wick form meet the requirements. The machinability of the aerogels was improved, and the porosity and gas permeability of the investigated aerogels were in an optimal range. Only the through-pore size of the aerogels, determined by bubble-point analysis, requires further recipe development and measurements in order to bring the largest through-pore down towards 1 µm.
Foreword to the Special Section on the Symposium on Virtual and Augmented Reality 2019 (SVR 2019)
(2020)
Discrimination and classification of eight strains related to meat spoilage microorganisms commonly found in poultry meat were successfully carried out using two dispersive Raman spectrometers (a Raman microscope and a portable fiber-optic system) in combination with chemometric methods. Principal Component Analysis (PCA) and Multi-Class Support Vector Machines (MC-SVM) were applied to develop discrimination and classification models. These models were validated using validation data sets, which were successfully assigned to the correct bacterial genera and even to the right strain. The discrimination of bacteria down to the strain level was performed on the pre-processed spectral data using a three-stage model based on PCA, and the spectral features and differences among the species on which the discrimination was based were clarified through the PCA loadings. In MC-SVM, the pre-processed spectral data were subjected to PCA and used to build a classification model. When using the first two components, the accuracy of the MC-SVM model was 97.64% and 93.23% for the validation data collected by the Raman microscope and the portable fiber-optic Raman system, respectively; the accuracy reached 100% when using the first eight and ten PCs from the data collected by the Raman microscope and the portable fiber-optic Raman system, respectively. The results reflect the strong discriminative power and high performance of the developed models, the suitability of the pre-processing method used in this study, and the fact that the lower accuracy of the portable fiber-optic Raman system does not adversely affect the discriminative power of the developed models.
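A minimal sketch of the chemometric workflow described above (pre-processed spectra projected onto a few principal components, then classified with a multi-class SVM); spectra and strain_labels are assumed arrays, not the study's data.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Project spectra onto the first PCs, then classify strains with a multi-class SVM.
model = make_pipeline(PCA(n_components=8),
                      SVC(kernel="linear", decision_function_shape="ovo"))
# scores = cross_val_score(model, spectra, strain_labels, cv=5)   # spectra, labels assumed
```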
Due to the popularity of the Internet and the networked services that it facilitates, networked devices have become increasingly common in both the workplace and everyday life in recent years—following the trail blazed by smartphones. The data provided by these devices allow for the creation of rich user profiles. As a result, the collection, processing and exchange of such personal data have become drivers of economic growth. History shows that the adoption of new technologies is likely to influence both individual and societal concepts of privacy. Research into privacy has therefore been confronted with continuously changing concepts due to technological progress. From a legal perspective, privacy laws that reflect social values are sought. Privacy enhancing technologies are developed or adapted to take account of technological development. Organizations must also identify protective measures that are effective in terms of scalability and automation. Similarly, research is being conducted from the perspective of Human-Computer Interaction (HCI) to explore design spaces that empower individuals to manage their protection needs with regard to novel data, which they may perceive as sensitive. Taking such an HCI perspective with regard to understanding privacy management on the Internet of Things (IoT), this research mainly focuses on three interrelated goals across the fields of application: 1. Exploring and analyzing how people make sense of data, especially when managing privacy and data disclosure; 2. Identifying, framing and evaluating potential resources for designing sense-making processes; and 3. Exploring the fitness of the identified concepts for inclusion in legal and technical perspectives on supporting decisions regarding privacy on the IoT. Although this work's point of departure is the HCI perspective, it emphasizes the importance of the interrelationships among seemingly independent perspectives. Their interdependence is therefore also emphasized and taken into account by subscribing to a user-centered design process throughout this study. More specifically, this thesis adopts a design case study approach. This approach makes it possible to conduct full user-centered design lifecycles in a concrete application case with participants in the context of everyday life. Based on this approach, it was possible to investigate several domains of the IoT that are currently relevant, namely smart metering, smartphones, smart homes and connected cars. The results show that the participants were less concerned about (raw) data than about the information that could potentially be derived from it. Against the background of the constant collection of highly technical and abstract data, the content of which only becomes visible through the application of complex algorithms, this study indicates that people should learn to explore and understand these data flexibly, and provides insights in how to design for supporting this aim. From the point of view of design for usable privacy protection measures, the information that is provided to users about data disclosure should be focused on the consequences thereof for users' environments and life. A related concept from law is “informed consent,” which I propose should be further developed in order to implement usable mechanisms for individual privacy protection in the era of the IoT. 
Finally, this thesis demonstrates how research on HCI can be methodologically embedded in a regulative process that will inform both the development of technology and the drafting of legislation.
Solving differential-algebraic equations (DAEs) efficiently by means of appropriate numerical schemes for time-integration is an ongoing topic in applied mathematics. In this context, especially when considering the large systems that occur in many fields of practical application, efficient computation becomes relevant. Typical examples arise when simulating network structures for the transport of fluid and gas, or electrical circuits. Due to the stiffness properties of DAEs, time-integration of such problems generally demands implicit strategies. Among the schemes that prove to be an adequate choice are linearly implicit Runge-Kutta methods in the form of Rosenbrock-Wanner (ROW) schemes. Compared to fully implicit methods, they are easy to implement and avoid the solution of non-linear equations by including Jacobian information within their formulation. However, Jacobian calculations are a costly operation, and the necessity of computing the exact Jacobian with every successful time-step proves to be a considerable drawback. To overcome this drawback, a ROW-type method is introduced that allows for non-exact Jacobian entries when solving semi-explicit DAEs of index one. The resulting scheme thus makes it possible to exploit several strategies for saving computational effort. Examples include using partial explicit integration of non-stiff components, utilizing more advantageous sparse Jacobian structures, or making use of time-lagged Jacobian information. In fact, due to the property of allowing for non-exact Jacobian expressions, the given scheme can be interpreted as a generalized ROW-type method for DAEs, because it covers many different ROW-type schemes known from the literature. To derive the order conditions of the introduced ROW-type method, a theory is developed that allows the occurring differentials and coefficients to be identified graphically by means of rooted trees. Rooted trees for describing numerical methods were originally introduced by J.C. Butcher; they significantly simplify the determination and definition of relevant characteristics because they allow straightforward procedures to be applied. The theory presented combines strategies used to represent ROW-type methods with exact Jacobian for DAEs and ROW-type methods with non-exact Jacobian for ODEs. For this purpose, new types of vertices are considered in order to completely describe the occurring non-exact elementary differentials. The resulting theory thus automatically comprises relevant approaches known from the literature; as a consequence, it allows the order conditions of familiar methods covered to be recognized and new conditions to be identified. With the theory developed, new sets of coefficients are derived that realize the introduced ROW-type method up to orders two and three. Some of them are constructed based on methods known from the literature that satisfy additional conditions for the purpose of avoiding effects of order reduction. It is shown that these methods can be improved by means of the newly derived order conditions without having to increase the number of internal stages. Convergence of the resulting methods is analyzed with respect to several academic test problems. The results verify the theory determined and the order conditions found, as only schemes satisfying the predicted order conditions preserve their order when using non-exact Jacobian expressions.
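As a hedged illustration of the linearly implicit idea behind ROW-type schemes, the following one-stage step for an ODE y' = f(y) solves (I - gamma*h*J) k = f(y_n) and sets y_{n+1} = y_n + h*k. The methods in the thesis are higher-order, treat semi-explicit index-1 DAEs, and admit non-exact Jacobians; this toy step is only meant to convey the structure.

```python
import numpy as np

def rosenbrock_euler_step(f, J, y, h, gamma=1.0):
    """One linearly implicit step: (I - gamma*h*J) k = f(y), y_new = y + h*k."""
    A = np.eye(len(y)) - gamma * h * J(y)   # J may be an approximate (non-exact) Jacobian
    k = np.linalg.solve(A, f(y))
    return y + h * k

# stiff scalar test problem y' = -50 (y - 1)
f = lambda y: -50.0 * (y - 1.0)
J = lambda y: np.array([[-50.0]])
y = np.array([0.0])
for _ in range(10):
    y = rosenbrock_euler_step(f, J, y, h=0.1)
print(y)  # approaches the steady state 1 without the step-size restriction of explicit Euler
```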
Network aggregation
(2020)
Telepresence robots allow people to participate in remote spaces, yet they can be difficult to manoeuvre with people and obstacles around. We designed a haptic-feedback system called "FeetBack," in which users place their feet when driving a telepresence robot. When the robot approaches people or obstacles, haptic proximity and collision feedback are provided on the respective sides of the feet, helping to inform users about events that are hard to notice through the robot's camera views. We conducted two studies: one to explore the usage of FeetBack in virtual environments, another focused on real environments. We found that FeetBack can increase spatial presence in simple virtual environments. Users valued the feedback for adjusting their behaviour in both types of environments, though it was sometimes too frequent or unneeded in certain situations after a period of time. These results point to the value of foot-based haptic feedback for telepresence robot systems, while also highlighting the need to design context-sensitive haptic feedback.
Deep learning models are extensively used in various safety-critical applications. Hence these models, along with being accurate, need to be highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in ensembles or forward passes and the drop probability values, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, by a margin of 2.4% in terms of mIoU and 1.3% in terms of accuracy, while providing reliable uncertainty for decision making.
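A minimal sketch of the MC-Dropout procedure evaluated in this work: dropout stays active at test time and several stochastic forward passes yield a predictive mean and a simple per-point uncertainty. The model interface and the entropy-based uncertainty measure are assumptions for illustration.

```python
import torch

def mc_dropout_predict(model, x, passes=30):
    model.train()  # keep dropout active at inference (other stochastic layers should be handled with care)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    mean = probs.mean(dim=0)                                   # predictive class distribution
    entropy = -(mean * torch.log(mean + 1e-12)).sum(dim=1)     # per-point/pixel uncertainty
    return mean, entropy
```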
Efficient and comprehensive assessment of students' knowledge is an imperative task in any learning process, and short answer grading is one of the most successful methods of assessing it. Many supervised learning and deep learning approaches have been used to automate the task of short answer grading in the past. We investigate why assistive grading with active learning would be the next logical step in this task, as there is no absolute ground-truth answer for any question and the task is very subjective in nature. We present a fast and easy method to harness the power of active learning and natural language processing in assisting the task of grading short answer questions. A web-based GUI is designed and implemented to provide an interactive short answer grading system. The experiments show that active learning saves graders' time and effort in assessment and reaches the performance of supervised learning with a smaller number of graded answers for training.
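A hedged sketch of a pool-based active-learning round with uncertainty sampling, one plausible realization of the assistive grading loop described above; the classifier interface (scikit-learn style fit/predict_proba) and the batch size are assumptions.

```python
import numpy as np

def uncertainty_sampling_round(model, X, grades, graded_idx, batch_size=10):
    """Retrain on graded answers, then query the least-confident ungraded ones."""
    model.fit(X[graded_idx], grades[graded_idx])
    ungraded = np.setdiff1d(np.arange(len(X)), graded_idx)
    confidence = model.predict_proba(X[ungraded]).max(axis=1)
    return ungraded[np.argsort(confidence)[:batch_size]]   # indices to send to the human grader
```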
Owing to increasing raw-material scarcity, the search for alternative, sustainable raw materials is moving ever more into the foreground. With regard to efficient chemical utilization, lignin offers numerous advantages for various fields of application, for example for bio-based polyurethane coatings such as corrosion protection. Major problems in the use of lignin arise from the heterogeneity of this natural product and its low polymerization compatibility with polyolefins; both factors influence, among other things, the mechanical properties of the corresponding lignin-based polymers. Moreover, the specific structure, and thus also the physical and chemical properties of the lignin, depend strongly on the respective raw-material source and the extraction process.
The aim of this work was the structural elucidation of unmodified and modified Kraft lignins (KL) and the investigation of the reactivity of aromatic and aliphatic hydroxyl groups as a function of pH. For this purpose, unmodified KL were extracted from black liquor and then first subjected to Soxhlet extraction in order to obtain lignin fractions soluble in methyltetrahydrofuran, predominantly of aromatic character, and thus to ensure improved solubility in THF, the solvent used in the subsequent polyurethane synthesis. In addition, the extracted KL were chemically modified by demethylation of methoxy groups, and the number of hydroxyl groups required for polymerization was quantified by wet-chemical methods and differential UV/VIS spectroscopy. Subsequently, lignin-based, functionalized polyurethane coatings were synthesized, with particular attention to ecological and economic sustainability aspects. Surface functionalization allowed the surface homogeneity to be improved and, via blend formation, TPM dyes to be embedded in the coatings. Regarding the influence of the pH chosen during extraction (pH = 2-5) on the behaviour of the KL obtained, changes were observed both in the structure of the lignins and in their thermal stability, and it was shown that the functionality and reactivity of the aromatic and aliphatic hydroxyl groups in the lignin increase with increasing pH. Homogeneous lignin-based polyurethane coatings (LPU coatings) were successfully synthesized from unmodified KL; when KL extracted at higher pH values were used, these LPU coatings showed a more homogeneous, hydrophobic surface finish and good thermal stability. Additional modification of the KL by demethylation led to a moderate increase in reactivity owing to the increased number of free hydroxyl groups, and thus to a further improvement of the surface properties in terms of a homogeneous surface structure and brilliance. With regard to sustainability, optimization of the synthesis (adjusting the raw-material particle size, ultrasonic treatment, and using the commercial trifunctional polyether polyol Lupranol® 3300 in combination with Desmodur® L75) increased the solubility of lignin in the polyol as well as the thermal stability of the LPU coatings. In the course of this optimization, energy savings were achieved through shortened drying times, and the amounts of commercially available chemicals used were reduced; both savings led to lower costs. At the same time, this not only increased the KL content in the polymer coating: an optimized, economical one-step synthesis also simplifies the transfer of this approach to industrial applications. Embedding selected TPM dyes (crystal violet and brilliant green) in the LPU coatings by blend formation demonstrably imparted an antimicrobial effect to the surface coating without the surface losing its homogeneity. The LPU coatings synthesized in this work could in future be used as corrosion-protection and antimicrobial coatings, e.g. in agriculture and the construction sector.
The findings obtained in the present work contribute to the structural elucidation of the complex biopolymer lignin. Beyond that, the investigations and results provide a basis for the sustainable production of lignin-based polymer coatings, which will become increasingly important in the future.
Opening the Career Counseling Black Box: Behavioral Mechanisms of Empathy and Working Alliance
(2020)
The term "additive manufacturing" covers all processes used to build up shaped parts layer by layer on the basis of CAD data. The build-up is always selective, following the positions specified by the CAD data. While this technology is already established in industrial use for metals and plastics, it is still in an early phase of development and application for ceramic materials. It is therefore all the more important to give a comprehensive account of the current state of the art and the potential for ceramic components.