In this work, resorcinol-formaldehyde aerogels were developed as wick material for loop heat pipes (LHP). Owing to their high porosity and effective capillary action, aerogels as wick material provide a good basis for mass and heat transport. These properties can contribute to improving the cooling performance of a heat pump. To this end, aerogels were synthesized in wick form, and their skeletal density, envelope density, porosity and gas permeability were subsequently determined. In addition, a test for swelling behaviour was developed. The samples were also sent to the company Allatherm to check whether the developed RF aerogels in wick form meet the requirements. The machinability of the aerogels was improved, and the porosity and gas permeability of the investigated aerogels lay in an optimal range. Only the through-pore size of the aerogels, determined by bubble-point analysis, requires further recipe development and measurements in order to bring the largest through-pore towards 1 µm.
This volume of the series Springer Briefs in Space Life Sciences explains the physics and biology of radiation in space, defines various forms of cosmic radiation and their dosimetry, and presents a range of exposure scenarios. It also discusses the effects of radiation on human health and describes the molecular mechanisms of heavy charged particles’ deleterious effects in the body. Lastly, it discusses countermeasures and addresses the vital question: Are we ready for launch?
Written for researchers in the space life sciences and space biomedicine, and for master’s students in biology, physics, and medicine, the book will also benefit all non-experts endeavoring to understand and enter space.
Open Innovation
(2020)
Reinforcement learning (RL) algorithms should learn as much as possible about the environment, but not the properties of the physics engine that generates it. Multiple algorithms solve tasks in physics-engine-based environments, but no work so far has examined whether RL algorithms can generalize across physics engines. In this work, we compare the generalization performance of various deep reinforcement learning algorithms on a variety of control tasks. Our results show that MuJoCo is the best engine for transferring learning to other engines; in contrast, none of the algorithms generalize when trained on PyBullet. We also found that several algorithms show promising generalizability if the effect of random seeds on their performance can be minimized.
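The evaluation protocol this abstract implies can be sketched as a minimal gym-style loop. This is a hypothetical illustration only; `StubEnv` and `evaluate_policy` are our names, not the authors' code, and a stub environment stands in for a real physics engine:

```python
# Hypothetical cross-engine evaluation sketch: a policy trained in engine A
# is scored with the same loop in engine B (here, a stub environment).

class StubEnv:
    """Minimal gym-like environment standing in for a physics engine."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action >= 0 else 0.0
        done = self.t >= self.horizon
        return 0.0, reward, done, {}

def evaluate_policy(policy, env, episodes=5):
    """Average undiscounted return of `policy` on `env`."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
    return total / episodes

# The same policy can then be evaluated on several engines and the
# returns compared, which is the generalization measure described above.
policy = lambda obs: 1.0
score = evaluate_policy(policy, StubEnv(), episodes=3)
```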
The present thesis elucidates the development of (i) a series of small-molecule inhibitors reacting in a covalent-irreversible manner with the targeted proteases and (ii) a fluorescently labeled activity-based probe as a pharmacological tool compound for investigating specific functions of the mentioned enzymes in vitro. The rational design, organic synthesis and quantitative structure-activity relationships are described extensively.
Facial emotion recognition is the task of classifying human emotions in face images. It is difficult due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the task's inherent uncertainty and ambiguity. In this paper we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an Ensemble, are able to model the aleatoric uncertainty in facial emotion recognition and produce output probabilities that are closer to what a human expects. We also show that calibration metrics behave strangely on this task because multiple classes can be considered correct, which motivates future work. We believe our work will motivate other researchers to move from Classical to Bayesian Neural Networks.
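The MC-Dropout approximation mentioned above can be illustrated with a toy sketch (NumPy only, hypothetical weights, not the paper's model): dropout is kept active at test time, class probabilities are averaged over several stochastic passes, and predictive entropy serves as the uncertainty measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_forward(x, w, drop_p=0.5, passes=50):
    """Approximate Bayesian inference: keep dropout active at test time and
    average class probabilities over several stochastic forward passes."""
    probs = []
    for _ in range(passes):
        mask = rng.random(w.shape) > drop_p          # random dropout mask
        logits = x @ (w * mask) / (1.0 - drop_p)     # scaled masked weights
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())                    # softmax per pass
    probs = np.mean(probs, axis=0)                   # predictive distribution
    entropy = -np.sum(probs * np.log(probs + 1e-12)) # uncertainty estimate
    return probs, entropy

x = np.array([1.0, -0.5])
w = rng.normal(size=(2, 7))    # toy weights for 7 emotion classes
p, h = mc_dropout_forward(x, w)
```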
In this paper we introduce the Perception for Autonomous Systems (PAZ) software library. PAZ is a hierarchical perception library that allows users to manipulate multiple levels of abstraction in accordance with their requirements or skill level. More specifically, PAZ is divided into three hierarchical levels, which we refer to as pipelines, processors, and backends. These abstractions allow users to compose functions in a hierarchical modular scheme that can be applied to preprocessing, data augmentation, prediction, and postprocessing of the inputs and outputs of machine learning (ML) models. PAZ uses these abstractions to build reusable training and prediction pipelines for multiple robot perception tasks such as 2D keypoint estimation, 2D object detection, 3D keypoint discovery, 6D pose estimation, emotion classification, face recognition, instance segmentation, and attention mechanisms.
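The compositional idea behind such hierarchical levels can be sketched as follows. This illustrates the pattern only, with invented class names; it is not PAZ's actual API:

```python
# Sketch of hierarchical function composition (hypothetical names, not PAZ):
# processors are composable callables, and a pipeline chains them.

class Processor:
    def __call__(self, data):
        raise NotImplementedError

class Normalize(Processor):
    """Example preprocessing step: scale pixel values to [0, 1]."""
    def __call__(self, data):
        return [x / 255.0 for x in data]

class Threshold(Processor):
    """Example postprocessing step: binarize by a threshold."""
    def __init__(self, t):
        self.t = t
    def __call__(self, data):
        return [1 if x > self.t else 0 for x in data]

class SequentialPipeline(Processor):
    """A pipeline is itself a processor, so pipelines nest hierarchically."""
    def __init__(self, *processors):
        self.processors = processors
    def __call__(self, data):
        for p in self.processors:
            data = p(data)
        return data

pre = SequentialPipeline(Normalize(), Threshold(0.5))
out = pre([0, 128, 255])
```

Because a pipeline is itself a processor, users can recombine the same building blocks at whatever level of abstraction suits them, which is the design idea the abstract describes.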
Mendelian diseases of dysregulated canonical NF-κB signaling: From immunodeficiency to inflammation
(2020)
The book bridges the gap between business-organizational methods and their digital implementation, since process management increasingly means designing business tasks. Alongside methodological foundations, the work offers many practical examples and exercises. Prof. Gadatsch's book is now regarded as the "current classic", THE definitive standard work on the IT-supported design of business processes.
The Concept of Purpose, Travelling, and Connectivity: Three Pillars of Organization and Leadership
(2020)
Analytische Chemie II
(2020)
This workbook accompanies the successful textbook Skoog/Holler/Crouch, Instrumentelle Analytik, and is designed above all for self-study. In five parts, it summarizes the lecture content of more advanced analytical chemistry and explains it using selected examples: mass spectrometry and nuclear magnetic resonance spectroscopy address the investigation of molecules, and numerous electroanalytical methods such as potentiometry, coulometry, amperometry and voltammetry are covered. An overview of more specialized analytical techniques deals, among other things, with the use of radioactive substances and various fluorescence methods, as well as with methods of information acquisition in the increasingly important field of electrochemical and optical sensor technology and its automation. The book concludes with a summary of statistical principles and methods of application that are simply indispensable in analytics. To facilitate independent learning, all parts of the book repeatedly refer to essential sections and figures of the textbook.
Validierung einer Web-Applikation zum Fern-Monitoring von Belastungs- und Erholungsparametern
(2020)
In parallel with the agile development of a web application that records parameters for managing training load and strain, the implemented load and recovery parameters were tested in practice on volunteer testers. To evaluate the external validity of both the application and the partly self-developed metrics, these are analysed by regression.
This book offers an easily understandable introduction to data mining and predictive analytics. Conceived as a collection of methods, it first gives a brief account of the theory behind each method and explains the formulas needed to understand it. Each method is then illustrated with examples worked through using the software package R.
Finally, a simple way of comparing the performance of different methods by statistical means is presented, using suitable graphics and confidence intervals.
The book does not dispense with theory, but it presents as little theory as possible and as much as necessary, making it ideally suited for courses and self-study.
Digital Business
(2020)
Digital Business covers the specific features of digital business models and the handling of data, explains how digital markets work and their impact on service functions such as HR, communication, finance and marketing. Key success factors such as agile management and customer experience are also addressed. In total, 30 experts contributed their specific know-how to this practice-oriented Litello eBook, which is also well suited as a basis for related courses.
Bionik
(2020)
How do they do that... one may ask in view of the astonishing abilities of some living creatures. Bionics asks further: ...and how can we imitate it? This is a focus of this textbook, which not only explains bionics through numerous examples but also teaches a procedure for identifying biological solutions and transferring them to technical applications. Basic information on biology and fundamentals of design engineering ensure easy access to the material. With 3D printing as a key technology and the topic of sustainability, the book also addresses current developments. This holistic view of bionics is intended to enable and motivate readers to carry out bionic projects. (Publisher's description)
Virtueller Journalismus
(2020)
Fundamental hydrogen storage properties of TiFe-alloy with partial substitution of Fe by Ti and Mn
(2020)
The TiFe intermetallic compound has been extensively studied owing to its low cost, good volumetric hydrogen density, and the easy tailoring of its hydrogenation thermodynamics by elemental substitution. All these positive aspects make this material promising for large-scale solid-state hydrogen storage applications. On the other hand, activation and kinetic issues should be amended and the role of elemental substitution should be further understood. This work investigates the thermodynamic changes induced by varying the Ti content along the homogeneity range of the TiFe phase (Ti:Fe ratio from 1:1 to 1:0.9) and by substituting Mn for Fe between 0 and 5 at.%. In all considered alloys, the major phase is TiFe-type, together with minor amounts of TiFe2- or β-Ti-type and Ti4Fe2O-type phases at the Ti-poor and Ti-rich sides of the TiFe phase domain, respectively. The thermodynamic data agree with the available literature but offer a comprehensive picture of hydrogenation properties over an extended Ti and Mn compositional range. Moreover, it is demonstrated that Ti-rich alloys display enhanced storage capacities, as long as only a limited amount of β-Ti is formed. Both Mn and Ti substitutions increase the cell parameter, possibly by substituting Fe, lowering the plateau pressures and decreasing the hysteresis of the isotherms. A full picture of the dependence of hydrogen storage properties on composition is discussed, together with some observed correlations.
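The plateau pressures discussed here are governed by the van 't Hoff relation. A small sketch with illustrative formation enthalpy and entropy values of roughly the right order of magnitude for TiFe-type hydrides (not the paper's fitted data) shows how the equilibrium pressure rises with temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def plateau_pressure_bar(dH, dS, T):
    """Van 't Hoff relation for a metal hydride plateau:
    ln(p / 1 bar) = dH/(R T) - dS/R, with formation enthalpy dH and
    entropy dS per mole of H2 (both negative for hydride formation)."""
    return math.exp(dH / (R * T) - dS / R)

# Illustrative values only, e.g. dH = -28 kJ/mol H2, dS = -106 J/(mol K):
p_298 = plateau_pressure_bar(-28000.0, -106.0, 298.0)  # room temperature
p_323 = plateau_pressure_bar(-28000.0, -106.0, 323.0)  # 50 °C
```

Elemental substitution that makes dH more negative lowers the plateau pressure, which is the thermodynamic tailoring the abstract refers to.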
Medien der Avantgarde
(2020)
"Durchdringen - Klarheit schaffen, Barrieren überwinden, Gehör finden" lautet diesmal das Motto des Jahresberichts. Er zeigt, wie die Hochschule nach Antworten auf die vielschichtigen, komplexen Fragen der Zeit sucht. Ob Digitalisierung, Klimawandel oder gesellschaftliche Verantwortung - Wissenschaftlerinnen und Wissenschaftler durchdringen ihre Themengebiete, und sie müssen am Ende mit ihren Erkenntnissen Gehör finden.
"Diffusion - create clarity, overcome barriers, be heard" is the title of this year's annual report. It shows how the university is searching for answers to the multilayered, complex questions of our time. Whether digitalisation, climate change or social responsibility - scientists are getting through their subject areas and in the end they have to make their findings heard.
The UN Global Compact for Migration describes Global Skills Partnerships (GSP) as an innovative way of strengthening the global skilled-labour base, but remains rather vague about their design. The activities known in Germany as "transnational training partnerships" are likewise very limited in number and scope, and hardly any empirical analyses of them exist so far. This expert report by Michael Sauer and Jurica Volarevic addresses this gap by presenting and analytically examining existing training partnerships, drawing in particular on the experience of the Republic of Kosovo. Based on a review of the conceptual discourse and practical experience, a categorization is proposed - transnational qualification and mobility partnerships (tQMP) - that helps to order the empirical diversity and make it conceptually tangible.
Design is omnipresent - it permeates life, often unconsciously, yet always perceptible and consequential. It is part of the habitus and an indispensable part of every identity. Even the deliberate renunciation of design expresses a specific design aesthetic that wants to be different. But which norms apply here and serve as orientation for such counter-movements? This becomes apparent above all in the shaping influence of mass-media discourses. The contributions in this volume develop a theoretical and practical framework for this central dispositif and reflect on indicators for the corresponding achievements. (Publisher's description)
Dürrenmatt in der Schule
(2020)
The motives for introducing public cloud services often lie in cost savings and quality improvement. During first-time adoption, avoidable mistakes are frequently made that later diminish the success of the project. This article describes a process model for introducing and using public cloud services that has proven itself in consulting practice, with particular attention to Microsoft cloud services.
An essential measure of autonomy in service robots designed to assist humans is adaptivity to the various contexts of human-oriented tasks. These robots may have to frequently execute the same action, but subject to subtle variations in task parameters that determine optimal behaviour. Such actions are traditionally executed by robots using pre-determined, generic motions, but a better approach could utilize robot arm maneuverability to learn and execute different trajectories that work best in each context.
In this project, we explore a robot skill acquisition procedure that allows incorporating contextual knowledge, adjusting executions according to context, and improvement through experience, as a step towards more adaptive service robots. We propose an apprenticeship learning approach to achieving context-aware action generalisation on the task of robot-to-human object hand-over. The procedure combines learning from demonstration, with which a robot learns to imitate a demonstrator’s execution of the task, and a reinforcement learning strategy, which enables subsequent experiential learning of contextualized policies, guided by information about context that is integrated into the learning process. By extending the initial, static hand-over policy to a contextually adaptive one, the robot derives and executes variants of the demonstrated action that most appropriately suit the current context. We use dynamic movement primitives (DMPs) as compact motion representations, and a model-based Contextual Relative Entropy Policy Search (C-REPS) algorithm for learning policies that can specify hand-over position, trajectory shape, and execution speed, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours.
We demonstrate the algorithm’s ability to learn context-dependent hand-over positions, and new trajectories, guided by suitable reward functions, and show that the current DMP implementation limits learning context-dependent execution speeds. We additionally conduct a user study involving participants assuming different postures and receiving an object from the robot, which executes hand-overs by either exclusively imitating a demonstrated motion, or selecting hand-over positions based on learned contextual policies and adapting its motion accordingly. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
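The motion representation used above can be illustrated with a minimal one-dimensional DMP rollout. This is a toy sketch with our own coefficients, not the thesis implementation: the same learned weights are reused while the goal, e.g. a context-dependent hand-over position, is changed.

```python
import numpy as np

def dmp_rollout(x0, goal, weights, dt=0.01, tau=1.0, alpha=25.0, beta=6.25):
    """Minimal 1-D dynamic movement primitive: a critically damped
    spring-damper pulled towards `goal`, shaped by a learned forcing term."""
    x, v, s = x0, 0.0, 1.0
    centers = np.linspace(0.0, 1.0, len(weights))
    traj = [x]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-50.0 * (s - centers) ** 2)            # RBF basis
        f = s * (goal - x0) * (psi @ weights) / (psi.sum() + 1e-10)
        a = alpha * (beta * (goal - x) - v) + f             # transformation system
        v += a * dt / tau
        x += v * dt / tau
        s += -4.0 * s * dt / tau                            # canonical system
        traj.append(x)
    return np.array(traj)

# Changing `goal` reuses the same weights while adapting the end point;
# a contextual policy such as C-REPS would select goal, shape, and speed.
path = dmp_rollout(x0=0.0, goal=0.5, weights=np.zeros(10))
```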
YAWL (Yet Another Workflow Language) is an open source Business Process Management System, first released in 2003. YAWL grew out of a university research environment to become a unique system that has been deployed worldwide as a laboratory environment for research in Business Process Management and as a productive system in other scientific domains.
Multiwalled carbon nanotubes (MWCNTs) were easily and efficiently functionalised with highly cross-linked polyamines. The radical polymerisation of two bis-vinylimidazolium salts in the presence of pristine MWCNTs and azobisisobutyronitrile (AIBN) as a radical initiator led to the formation of materials with a high functionalisation degree. The subsequent treatment with sodium borohydride gave rise to the reduction of imidazolium moieties with the concomitant formation of secondary and tertiary amino groups. The obtained materials were characterised by thermogravimetric analysis (TGA), elemental analysis, solid state 13C-NMR, Fourier-transform infrared spectroscopy (FT-IR), transmission electron microscopy (TEM), potentiometric titration, and temperature programmed desorption of carbon dioxide (CO2-TPD). One of the prepared materials was tested as a heterogeneous base catalyst in C–C bond forming reactions such as the Knoevenagel condensation and Henry reaction. Furthermore, two examples concerning a sequential one-pot approach involving two consecutive reactions, namely Knoevenagel and Michael reactions, were reported.
The 2010s were marked by a sharp rise in rents and purchase prices, especially in metropolitan areas. From this it is sometimes concluded that housing is "the social question of our time". Based on the German Socio-Economic Panel (SOEP), this analysis examines the development of the housing cost burden both longitudinally and cross-sectionally, and explains the historical significance of the social question. Overall, strong labour-market development combined with a reduction in living space has kept the housing cost burden constant for many households. Only a few households have actually seen a noticeable increase in their burden, and this has been accompanied by increased satisfaction with their housing situation. Even if housing therefore cannot be called the social question of our time, numerous households still need support, and their number is likely to rise further, particularly because of the virus-induced economic crisis of 2020. The available instruments, such as housing benefit and social housing, should be strengthened, but especially for social housing, care should be taken to improve its social targeting.
Risk-based authentication (RBA) is an adaptive security measure for strengthening password-based authentication. It records features during login and requests additional authentication when the observed feature values differ significantly from those previously known. RBA offers the potential for more usable security, but has not yet been sufficiently studied with respect to usability, security and privacy. This extended abstract outlines the planned dissertation project on RBA. Within the project, a foundational study and a laboratory study building on it have already been conducted. We present first results of these studies and give an outlook on further steps.
The ongoing coronavirus disease 2019 (COVID-19) pandemic threatens global health thereby causing unprecedented social, economic, and political disruptions. One way to prevent such a pandemic is through interventions at the human-animal-environment interface by using an integrated One Health (OH) approach. This systematic literature review documented the three coronavirus outbreaks, i.e. SARS, MERS, COVID-19, to evaluate the evolution of the OH approach, including the identification of key OH actions taken for prevention, response, and control.
The OH understandings identified were categorized into three distinct patterns: institutional coordination and collaboration, OH in action/implementation, and extended OH (i.e. a clear involvement of the environmental domain). Across all studies, OH was most often framed as OH in action/implementation and least often in its extended meaning. Utilizing OH as institutional coordination and collaboration and the extended OH both increased over time. OH actions were classified into twelve sub-groups and further categorized as classical OH actions (i.e. at the human-animal interface), classical OH actions with outcomes to the environment, and extended OH actions.
The majority of studies focused on human-animal interaction, giving less attention to the natural and built environment. Different understandings of the OH approach in practice and several practical limitations might hinder current efforts to achieve the operationalization of OH by combining institutional coordination and collaboration with specific OH actions. The actions identified here are a valuable starting point for evaluating the stage of OH development in different settings. This study showed that by moving beyond the classical OH approach and its actions towards a more extended understanding, OH can unfold its entire capacity thereby improving preparedness and mitigating the impacts of the next outbreak.
This study aimed to validate the occupational-health self-efficacy assessment instrument developed in the preliminary study phase. The Self-Efficacy Scale for Occupational Health (SEDKK) is based on the self-efficacy concept of social cognitive theory and measures four factors that influence the health of every working individual: eating and drinking behaviour, sleep, occupational safety and health, and recovery from work stress. Exploratory factor analysis showed that four factors are reflected in the SEDKK items. The construct validity of the SEDKK is supported by a highly significant positive correlation between the SEDKK and the General Self-Efficacy scale. Criterion validity can be traced through the effect of the SEDKK on general health, satisfaction with personal health, work-life balance, healthy behaviour, and risk behaviour. However, the assumption of test-retest reliability was rejected in this study. Implications and suggestions for future research are discussed in this article.
Any political phenomenon can only be properly understood in its broader context. Questions of international cooperation are thus necessarily framed by historical processes and relations of power. We therefore start our first discussion with an examination of the global 'status quo' and embed the topic of this publication, ODA graduation, into the shifting world order, analysing current roles and settings in international relations and identifying changes in positions, status and categories. What are the overarching issues determining world politics and who are the old and the new actors driving them? What is the impact of these global shifts on international cooperation, especially development cooperation? Of what relevance are roles, status and categories, and what is the impact of changes in positions and relations? What challenges face multilateralism, and what ways exist to maintain and renew strategic partnerships and shared values?
Using the example of a course taught in person for years, with lectures, exercises and laboratory practicals, it is shown how the teaching of exam-relevant competencies also succeeded "online". The appropriate "setting" of the teaching and learning process, observing recommended practices, remains relevant for the future.
Driven by ongoing digitalisation and the big data trend, ever more data are available. This creates great potential, especially for companies. The ability to handle and analyse these data is embodied in the role of the data scientist, currently one of the most sought-after professions. However, integrating data into corporate strategy and culture is a major challenge: complex data and analysis results must also be communicated to stakeholders without an affinity for data. This is where data storytelling plays a decisive role, because in order to bring about change with data, understanding of and motivation for the subject must first be created in a target-group-specific way. Data storytelling is, however, still a niche topic. Using a systematic literature review, this thesis derives the success factors of data storytelling for effective and efficient communication of data, in order to support data scientists in research and practice in communicating data and results.
An essential measure of autonomy in assistive service robots is adaptivity to the various contexts of human-oriented tasks, which are subject to subtle variations in task parameters that determine optimal behaviour. In this work, we propose an apprenticeship learning approach to achieving context-aware action generalization on the task of robot-to-human object hand-over. The procedure combines learning from demonstration and reinforcement learning: a robot first imitates a demonstrator’s execution of the task and then learns contextualized variants of the demonstrated action through experience. We use dynamic movement primitives as compact motion representations, and a model-based C-REPS algorithm for learning policies that can specify hand-over position, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours. We additionally conduct a user study involving participants assuming different postures and receiving an object from a robot, which executes hand-overs by either imitating a demonstrated motion, or adapting its motion to hand-over positions suggested by the learned policy. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
Deep learning models are extensively used in various safety-critical applications, and hence need to be not only accurate but also highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used in robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in an ensemble, the number of forward passes, and the drop probability, on task performance and the quality of the uncertainty estimates. We find that Deep Ensembles outperform the other methods in both performance and uncertainty metrics, by a margin of 2.4% in mIOU and 1.3% in accuracy, while providing reliable uncertainty for decision making.
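The deep-ensembles idea evaluated above can be sketched in a few lines (toy logits, not the DarkNet21Seg model): member predictions are averaged, and the gap between the entropy of the averaged prediction and the average per-member entropy measures disagreement, i.e. epistemic uncertainty.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_uncertainty(logit_list):
    """Average member predictions and quantify their disagreement
    (a sketch of the deep-ensembles idea, with toy per-member logits)."""
    probs = np.stack([softmax(z) for z in logit_list])
    mean = probs.mean(axis=0)                              # predictive dist.
    total = -np.sum(mean * np.log(mean + 1e-12))           # total uncertainty
    aleatoric = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
    return mean, total, total - aleatoric                  # epistemic part

# Two agreeing members give near-zero epistemic uncertainty;
# two disagreeing members give a large one.
agree = [np.array([5.0, 0.0]), np.array([5.0, 0.0])]
disagree = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
_, _, mi_agree = ensemble_uncertainty(agree)
_, _, mi_disagree = ensemble_uncertainty(disagree)
```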
Short summary
This dataset accompanies our paper
A. Mitrevski, P. G. Plöger, and G. Lakemeyer, "Representation and Experience-Based Learning of Explainable Models for Robot Action Execution," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
Contents
There are three zip archives included, each of them a dump of a MongoDB database corresponding to one of the three experiments in the paper:
Grasping a drawer handle (handle_drawer_logs.zip)
Grasping a fridge handle (handle_fridge_logs.zip)
Pulling an object (pull_logs.zip)
All three experiments were performed with a Toyota HSR. Only the data necessary for learning the models used in our experiments are included here.
Usage
After unzipping the archives, each database can be restored with the command
mongorestore [directory_name]
This will create a MongoDB database with the name of the directory (handle_drawer_logs, handle_fridge_logs, and pull_logs).
Code for processing the data and model learning can be found in our GitHub repository: https://github.com/alex-mitrevski/explainable-robot-execution-models
It is only a matter of time until autonomous vehicles become ubiquitous; however, human driving supervision will remain a necessity for decades. To assess the driver's ability to take control of the vehicle in critical scenarios, driver distraction can be monitored using wearable sensors or sensors embedded in the vehicle, such as video cameras. Which types of driving distraction can be sensed with which sensors is an open research question that this study attempts to answer. This study compared data from physiological sensors (palm electrodermal activity (pEDA), heart rate and breathing rate) and visual sensors (eye tracking, pupil diameter, nasal EDA (nEDA), emotional activation and facial action units (AUs)) for the detection of four types of distraction. The dataset was collected in a previous driving simulation study. Statistical tests showed that the most informative feature/modality for detecting driver distraction depends on the type of distraction, with emotional activation and AUs being the most promising. An experimental comparison of seven classical machine learning (ML) and seven end-to-end deep learning (DL) methods, evaluated on a separate test set of 10 subjects, showed that when classifying windows as distracted or not distracted, the highest F1-score of 79% was achieved by the extreme gradient boosting (XGB) classifier using 60-second windows of AUs as input. When classifying complete driving sessions, XGB's F1-score was 94%. The best-performing DL model was a spectro-temporal ResNet, which achieved an F1-score of 75% when classifying segments and an F1-score of 87% when classifying complete driving sessions. Finally, this study identified and discussed problems, such as label jitter, scenario overfitting and unsatisfactory generalization performance, that may adversely affect related ML approaches.
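The windowed classification setup described above can be sketched with toy data. A simple threshold stands in for the XGB classifier; the windowing and the F1 computation follow the standard definitions, and all data here are invented:

```python
import numpy as np

def windows(signal, labels, size):
    """Split a per-frame signal into fixed-length windows with majority
    labels (a sketch of the 60-second windowing, with toy data)."""
    n = len(signal) // size
    X = signal[: n * size].reshape(n, size)
    y = labels[: n * size].reshape(n, size).mean(axis=1) >= 0.5
    return X.mean(axis=1, keepdims=True), y.astype(int)  # one feature/window

def f1_score(y_true, y_pred):
    """Standard F1 = 2 TP / (2 TP + FP + FN)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

# Toy AU-activation stream: distracted frames have higher mean activation.
sig = np.concatenate([np.zeros(120), np.ones(120)])
lab = np.concatenate([np.zeros(120), np.ones(120)])
X, y = windows(sig, lab, size=60)
pred = (X[:, 0] > 0.5).astype(int)   # stand-in for the XGB classifier
score = f1_score(y, pred)
```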
This dissertation presents a probabilistic state estimation framework for integrating data-driven machine learning models and a deformable facial shape model in order to estimate continuous-valued intensities of 22 different facial muscle movements, known as Action Units (AU), defined in the Facial Action Coding System (FACS). A practical approach is proposed and validated for integrating class-wise probability scores from machine learning models within a Gaussian state estimation framework. Furthermore, driven mass-spring-damper models are applied for modelling the dynamics of facial muscle movements. Both facial shape and appearance information are used for estimating AU intensities, making it a hybrid approach. Several features are designed and explored to help the probabilistic framework to deal with multiple challenges involved in automatic AU detection. The proposed AU intensity estimation method and its features are evaluated quantitatively and qualitatively using three different datasets containing either spontaneous or acted facial expressions with AU annotations. The proposed method produced temporally smoother estimates that facilitate a fine-grained analysis of facial expressions. It also performed reasonably well, even though it simultaneously estimates intensities of 22 AUs, some of which are subtle in expression or resemble each other closely. The estimated AU intensities tended to the lower range of values, and were often accompanied by a small delay in onset. This shows that the proposed method is conservative. In order to further improve performance, state-of-the-art machine learning approaches for AU detection could be integrated within the proposed probabilistic AU intensity estimation framework.
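The framework above combines a driven mass-spring-damper motion model with Gaussian state estimation. A minimal single-AU sketch, assuming a linear Kalman filter, forward-Euler discretization, and illustrative parameter values (the dissertation's actual model covers 22 AUs and integrates class-wise probability scores as richer measurements):

```python
import numpy as np

def msd_kalman(measurements, dt=0.1, k=4.0, c=1.0, m=1.0, q=1e-2, r=1e-2):
    """Track one AU intensity with a Kalman filter whose dynamics are a
    mass-spring-damper; state = [intensity, velocity]."""
    # Forward-Euler discretization of the mass-spring-damper dynamics
    F = np.array([[1.0, dt],
                  [-k * dt / m, 1.0 - c * dt / m]])
    H = np.array([[1.0, 0.0]])   # only the intensity is observed
    Q = q * np.eye(2)            # process noise covariance
    R = np.array([[r]])          # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict with the spring-damper model
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the (e.g. classifier-derived) intensity measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

The spring pulls the predicted intensity back toward rest, which is one way the temporally smooth, slightly conservative estimates described above can arise.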
Towards an Interaction-Centered and Dynamically Constructed Episodic Memory for Social Robots
(2020)
Describing the elephant: a foundational model of human needs, motivation, behaviour, and wellbeing
(2020)
Models of basic psychological needs have been present and popular in the academic and lay literature for more than a century, yet reviews of needs models show an astonishing lack of consensus. This raises the question of what basic human psychological needs are, and whether they can be consolidated into a model or framework that aligns previous research and empirical study. The authors argue that the lack of consensus arises from researchers describing parts of the proverbial elephant correctly but failing to describe the full elephant. By redefining what human needs are and matching this to an evolutionary framework, we can see broad consensus across needs models and neatly slot constructs and psychological and behavioural theories into this framework. This enables a descriptive model of drives, motives, and well-being that can be simply outlined yet is refined enough to do justice to the complexities of human behaviour. This also raises questions about how subjective well-being is, and should be, measured. Further avenues of research and ways to continue building this model and framework are proposed.
This study advances the research and methodological approach to measuring and understanding national-level destination competitiveness, sustainability and governance by creating a model that could be of use for both developing and developed destinations. The study gives a detailed overview of the research field of measuring destination competitiveness and sustainability. It also identifies major predictors of destination competitiveness and sustainability, thereby presenting destination researchers and practitioners with a useful list of priority areas, both from a global perspective and from the perspective of other similar destinations. Finally, the study identifies two major types of destination governance, with implications for research, policy and practice across the destination life-cycle. The research analyses secondary data from the World Economic Forum Travel and Tourism Index (WEF T&T). Major types of destination governance and predictors of belonging to either one of the types, as well as within-cluster predictors, were extracted through a two-step cluster analysis. The results support the notion that a meaningful model of national-level destination governance needs to take into account the different development levels of different destinations. The main limitation of the study is its typology creation approach, as it inevitably leads to simplifications.
Towards a conceptual framework for sustainable business models in the food and beverage industry
(2020)
The ability to finely segment different instances of various objects in an environment is a critical tool in the perception toolbox of any autonomous agent. Traditionally, instance segmentation is treated as a multi-label pixel-wise classification problem. This formulation has resulted in networks that are capable of producing high-quality instance masks but are extremely slow for real-world usage, especially on platforms with limited computational capabilities. This thesis investigates an alternative regression-based formulation of instance segmentation to achieve a good trade-off between mask precision and run-time. In particular, the instance masks are parameterized and a CNN is trained to regress to these parameters, analogous to the bounding box regression performed by an object detection network.
In this investigation, the instance segmentation masks in the Cityscapes dataset are approximated using irregular octagons, and an existing object detector network (i.e., SqueezeDet) is modified to regress to the parameters of these octagonal approximations. The resulting network is referred to as SqueezeDetOcta. At the image boundaries, object instances are only partially visible. Due to the convolutional nature of most object detection networks, special handling of boundary-adhering object instances is warranted. However, current object detection techniques seem to be unaffected by this and handle all object instances alike. To this end, this work proposes selectively learning only the partial, untainted parameters of the bounding box approximation of boundary-adhering object instances. Anchor-based object detection networks like SqueezeDet and YOLOv2 have a discrepancy between the ground-truth encoding/decoding scheme and the coordinate space used for clustering to generate the prior anchor shapes. To resolve this disagreement, this work proposes clustering in a space defined by two coordinate axes representing the natural log transformations of the width and height of the ground-truth bounding boxes.
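The octagonal approximation above can be illustrated with a small geometry helper. The parameterization below (a bounding box plus four corner-cut lengths) is an assumption for illustration, not necessarily the exact encoding regressed by SqueezeDetOcta:

```python
def octagon_from_box(x, y, w, h, cuts):
    """Return the 8 vertices of an irregular octagon obtained by cutting the
    four corners of the axis-aligned box (x, y, w, h).
    `cuts` = (tl, tr, br, bl) gives the cut length at each corner."""
    tl, tr, br, bl = cuts
    return [
        (x + tl, y), (x + w - tr, y),          # top edge
        (x + w, y + tr), (x + w, y + h - br),  # right edge
        (x + w - br, y + h), (x + bl, y + h),  # bottom edge
        (x, y + h - bl), (x, y + tl),          # left edge
    ]
```

Under such an encoding, the network predicts only four extra scalars per instance beyond the usual box parameters, which is what keeps the forward-pass cost on par with plain bounding box regression.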
When both SqueezeDet and SqueezeDetOcta were trained from scratch, SqueezeDetOcta lagged behind the SqueezeDet network by ≈ 6.19 mAP. Further analysis revealed that the sparsity of the annotated data was the reason for this lackluster performance of the SqueezeDetOcta network. To mitigate this issue, transfer learning was used to fine-tune the SqueezeDetOcta network starting from the trained weights of the SqueezeDet network. When all layers of SqueezeDetOcta were fine-tuned, it outperformed the SqueezeDet network paired with logarithmically extracted anchors by ≈ 0.77 mAP. In addition, the forward-pass latencies of both SqueezeDet and SqueezeDetOcta are ≈ 19 ms. Accounting for boundary adhesion during training improved the baseline SqueezeDet network by ≈ 2.62 mAP. A SqueezeDet network paired with logarithmically extracted anchors improved on the baseline SqueezeDet network by ≈ 1.85 mAP.
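The logarithmic anchor extraction can be sketched as k-means clustering in (log w, log h) space, with the centroids mapped back to pixel space by exponentiation. This toy implementation (plain k-means with a fixed seed) is illustrative, not the thesis's exact procedure:

```python
import numpy as np

def log_space_anchors(boxes_wh, k, iters=50, seed=0):
    """Cluster ground-truth box (w, h) pairs in (log w, log h) space and
    return the centroids, exponentiated back to pixel space, as anchors."""
    rng = np.random.default_rng(seed)
    pts = np.log(np.asarray(boxes_wh, dtype=float))
    centroids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest centroid in log space
        d = ((pts[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Move each centroid to the mean of its assigned boxes
        for j in range(k):
            if (assign == j).any():
                centroids[j] = pts[assign == j].mean(0)
    return np.exp(centroids)
```

Clustering in log space makes the distance metric match the multiplicative encoding that anchor-based detectors use when regressing width and height, which is the disagreement the thesis set out to resolve.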
In summary, this work demonstrates that if given sufficient fine instance annotated data, an existing object detection network can be modified to predict much finer approximations (i.e., irregular octagons) of the instance annotations, whilst having the same forward pass latency as that of the bounding box predicting network. The results justify the merits of logarithmically extracted anchors to boost the performance of any anchor-based object detection network. The results also showed that the special handling of image boundary adhering object instances produces more performant object detectors.
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity, allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
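A linkless octree replaces parent/child pointers with a hash table keyed by location codes, which is what yields the memory-efficient storage and constant-time lookup mentioned above. A minimal sketch along those lines, using Morton codes as location keys (illustrative, not the paper's implementation):

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of quantized x, y, z into one 3D Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

class RadianceCache:
    """Hash map keyed by (octree level, Morton code): O(1) insert and lookup."""
    def __init__(self):
        self.cells = {}

    def key(self, p, level):
        s = 1 << level                            # cells per axis at this level
        q = [min(int(c * s), s - 1) for c in p]   # quantize p in [0, 1)^3
        return (level, morton3(*q))

    def store(self, p, level, radiance):
        self.cells[self.key(p, level)] = radiance

    def lookup(self, p, level):
        return self.cells.get(self.key(p, level))
```

Caching diffuse illumination this way lets hitpoints along new paths reuse stored values from any earlier path that landed in the same cell.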
Evaluation of a Multi-Layer 2.5D display in comparison to conventional 3D stereoscopic glasses
(2020)
In this paper we propose and evaluate a custom-built projection-based multi-layer 2.5D display consisting of three image layers, and compare its performance to a stereoscopic 3D display. Stereoscopic vision can increase involvement and enhance the game experience; however, it may induce side effects such as motion sickness and simulator sickness. To overcome the disadvantage of multiple discrete depths, our system uses perspective rendering and head-tracking. A study with 20 participants playing custom-designed games was performed to evaluate the display. The results indicated that the multi-layer display caused fewer side effects than the stereoscopic display and provided good usability. The participants also reported better or equal spatial perception, while cognitive load stayed the same.
Graph drawing with spring embedders employs a pairwise V × V computation phase over the graph's vertex set to compute repulsive forces. Here, the efficacy of forces diminishes with distance: a vertex can effectively only influence other vertices within a certain radius around its position. Therefore, the algorithm lends itself to an implementation using search data structures to reduce the runtime complexity. NVIDIA RT cores implement hierarchical tree traversal in hardware. We show how to map the problem of finding graph layouts with force-directed methods to a ray tracing problem that can subsequently be implemented with dedicated ray tracing hardware. With that, we observe speedups of 4x to 13x over a CUDA software implementation.
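The bounded-radius repulsion described above can be written down as a reference O(|V|²) loop; finding, for each vertex, the neighbors inside `radius` is exactly the query that can be offloaded to hierarchical-traversal hardware. A Fruchterman–Reingold-style force term is assumed here for illustration:

```python
import math

def repulsive_displacements(pos, radius, k=1.0):
    """O(|V|^2) reference pass: each vertex is repelled only by vertices
    within `radius`; pos is a list of 2D coordinates."""
    disp = [(0.0, 0.0) for _ in pos]
    for i, (xi, yi) in enumerate(pos):
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                f = k * k / d  # Fruchterman-Reingold style repulsion
                disp[i] = (disp[i][0] + dx / d * f,
                           disp[i][1] + dy / d * f)
    return disp
```

Replacing the inner loop with a hardware-accelerated fixed-radius query is what turns this quadratic pass into the ray tracing formulation described above.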