H-BRS Bibliography
Departments, institutes and facilities
- Fachbereich Informatik (69)
- Fachbereich Angewandte Naturwissenschaften (68)
- Fachbereich Wirtschaftswissenschaften (64)
- Institut für Technik, Ressourcenschonung und Energieeffizienz (TREE) (50)
- Fachbereich Ingenieurwissenschaften und Kommunikation (43)
- Fachbereich Sozialpolitik und Soziale Sicherung (33)
- Institut für Verbraucherinformatik (IVI) (18)
- Institute of Visual Computing (IVC) (18)
- Internationales Zentrum für Nachhaltige Entwicklung (IZNE) (13)
- Präsidium (13)
Document Type
- Article (105)
- Conference Object (75)
- Part of a Book (32)
- Book (monograph, edited volume) (20)
- Part of Periodical (17)
- Report (15)
- Preprint (10)
- Doctoral Thesis (8)
- Master's Thesis (6)
- Working Paper (3)
Year of publication
- 2019 (300)
Keywords
- Lehrbuch (4)
- lignin (4)
- Lignin (3)
- Navigation (3)
- work engagement (3)
- Aminoacylase (2)
- Chemie (2)
- Design (2)
- Drosophila (2)
- Exergame (2)
Traffic sign recognition is an important component of many advanced driver assistance systems, and it is required for full autonomous driving. Computational performance is usually the bottleneck in using large-scale neural networks for this purpose. SqueezeNet is a good candidate for efficient image classification of traffic signs, but in our experiments it does not reach high accuracy, and we believe this is due to a lack of data, requiring data augmentation. Generative adversarial networks can learn the high-dimensional distribution of empirical data, allowing the generation of new data points. In this paper we apply the pix2pix GAN architecture to generate new traffic sign images and evaluate the use of these images in data augmentation. We were motivated to use pix2pix to translate symbolic sign images into real ones due to the mode collapse in conditional GANs. Through our experiments we found that data augmentation using GANs can increase classification accuracy for circular traffic signs from 92.1% to 94.0%, and for triangular traffic signs from 93.8% to 95.3%, producing an overall improvement of 2%. However, some traditional augmentation techniques can outperform GAN data augmentation, for example contrast variation on circular traffic signs (95.5%) and displacement on triangular traffic signs (96.7%). Our negative results show that while GANs can be naively used for data augmentation, they are not always the best choice, depending on the problem and the variability in the data.
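The traditional augmentations that outperformed the GAN in these experiments, contrast variation and displacement, are simple to reproduce. The following is an illustrative sketch for grey-scale images stored as nested lists, not the paper's implementation; the value ranges are assumptions:

```python
import random

def adjust_contrast(image, factor):
    """Scale pixel values around mid-grey (simple contrast variation)."""
    return [[max(0, min(255, int(128 + (p - 128) * factor))) for p in row]
            for row in image]

def displace(image, dx, dy, fill=0):
    """Shift the image by (dx, dy) pixels, padding uncovered areas with `fill`."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = image[y][x]
    return out

def augment(image, rng=random):
    """Produce one randomly augmented copy of a grey-scale image."""
    image = adjust_contrast(image, rng.uniform(0.7, 1.3))
    return displace(image, rng.randint(-2, 2), rng.randint(-2, 2))
```

Each training image would be passed through `augment` one or more times to enlarge the data set before classifier training.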
Qualität als Erfolgsfaktor
(2019)
Analytische Chemie I
(2019)
Illuminating the different facets of the digital future (be it the active shaping role of politics, the ethical and moral adjustments that digitalisation brings about in society, or technical and economic responsibility) and relating them to one another is the task and aim of this publication. In the course of the digital transformation, moreover, a new culture for society, politics, and business is called for.
Meine Zeitung geht online
(2019)
The Learning Culture Survey (LCS) is a questionnaire-based study investigating students' perceptions of and expectations towards Higher Education (HE). The aim of this survey is to improve our understanding of the sources of cultural conflicts in educational scenarios. This understanding shall help us to predict potential conflict situations and develop supportive measures.
After three years of development, the LCS was initiated in 2010 in South Korea and Germany. During the following years, the investigations were extended to further countries. The results provided insights about the cultural context of HE in general on the one hand, and about specific (national/regional) characteristics of learners in HE on the other. Most issues targeted by the questionnaire were directly linked to value systems. Thus, we expected from the beginning that the collected data would remain valid over longer periods of time. However, we had no evidence regarding the actual persistence of learning culture. For a study designed to be implemented on a global scale and to provide input for further applications, persistence is a basic condition to justify related investigations.
To answer the question of persistence, we repeated the LCS at our university every four years from 2010 to 2018/19. Apart from a small number of slight changes, explainable by their situational context, the overall results remained consistent over the investigated years. In this paper, after an introduction to the LCS's concept, setting, and general results from the past years, we present the insights from our most recently finalized longitudinal study on learning culture.
This handbook contains lots of interesting information for international students about studying at H-BRS and living in the Rhineland.
Change - shaping reality
(2019)
The media is considered the fourth pillar of a democratic country. It acts as an effective control mechanism to check the other branches of government. But this is only consequential when the media functions in an independent and transparent fashion, with trained and neutral professionals who are aware of the accountability and consequences of their work. Together, these factors strengthen the country as a democratic institution. Traditionally, legacy media was responsible for a one-to-many communication process; its goal was to provide information to the citizens. But this changed with developments in technology and the use of social media in daily life. The internet brought with it new media formats which are easily accessible but also unstructured. These lowered barriers of entry enabled citizens to become active participants in the communication process. As a result, these citizens developed a different relationship with the existing media, in which they were not only receivers of information but also co-producers. Real-time information allows users to communicate with each other and in turn widely generate public opinion on internet platforms. A many-to-many communication style emerged. While on the one hand this type of discourse can be an opportunity for citizens to exercise their fundamental freedom of speech and expression, on the other hand it is proving to have a detrimental effect in two respects: a lack of neutrality, polarized views, and pre-existing misconceptions on the part of citizens, as well as algorithms and the formation of echo chambers on the part of technology. Some questions arise in this scenario about the capability of citizen journalists, the duties they should adhere to along with the enjoyment of their rights and freedoms, the risks involved in an unchecked method of communication, and the effect of citizen journalism on the democratic process.
Chemistry is much simpler than is often claimed. This book aims to spark or deepen your interest in the subject. All fundamental principles of chemistry are presented in a comprehensible way. Cross-references and connections between the various subfields are shown. You will not find a single formula whose derivation you cannot follow. At the end of almost every chapter there are exercises, with detailed solutions provided as well. This should not only suffice for the examinations of the first semesters, but also give you a solid foundation for the rest of your studies.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include off-line and semi on-line allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
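The auction idea that SSI and TeSSI build on can be sketched in a few lines. The cost model below (1-D positions, marginal travel cost) is a made-up stand-in: the real algorithms bid with schedule costs that respect temporal constraints.

```python
def allocate_ssi(tasks, robots, cost):
    """Greedy sequential single-item auction: in each round every robot bids
    its marginal cost for every unallocated task; the cheapest (robot, task)
    bid wins and the task is appended to that robot's schedule."""
    schedules = {r: [] for r in robots}
    remaining = list(tasks)
    while remaining:
        _, winner, task = min(((cost(r, t, schedules[r]), r, t)
                               for r in robots for t in remaining),
                              key=lambda bid: bid[0])
        schedules[winner].append(task)
        remaining.remove(task)
    return schedules

# Illustrative cost model: robots and tasks on a line; marginal cost is the
# distance from the robot's last scheduled task (or its start) to the task.
START = {'r1': 0, 'r2': 10}
TASK_POS = {'t1': 1, 't2': 9}

def travel_cost(robot, task, schedule):
    loc = TASK_POS[schedule[-1]] if schedule else START[robot]
    return abs(TASK_POS[task] - loc)
```

With this toy cost, `allocate_ssi(['t1', 't2'], ['r1', 'r2'], travel_cost)` assigns each task to the nearer robot.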
Currently, a variety of methods exist for creating different types of spatio-temporal world models. Despite the numerous methods for this type of modeling, there exists no methodology for comparing the different approaches or their suitability for a given application e.g. logistics robots. In order to establish a means for comparing and selecting the best-fitting spatio-temporal world modeling technique, a methodology and standard set of criteria must be established. To that end, state-of-the-art methods for this type of modeling will be collected, listed, and described. Existing methods used for evaluation will also be collected where possible.
Using the collected methods, new criteria and techniques will be devised to enable the comparison of various methods in a qualitative manner. Experiments will be proposed to further narrow and ultimately select a spatio-temporal model for a given purpose. An example network of autonomous logistic robots, ROPOD, will serve as a case study used to demonstrate the use of the new criteria. This will also serve to guide the design of future experiments that aim to select a spatio-temporal world modeling technique for a given task. ROPOD was specifically selected as it operates in a real-world, human shared environment. This type of environment is desirable for experiments as it provides a unique combination of common and novel problems that arise when selecting an appropriate spatio-temporal world model. Using the developed criteria, a qualitative analysis will be applied to the selected methods to remove unfit options.
Then, experiments will be run on the remaining methods to provide comparative benchmarks. Finally, the results will be analyzed and recommendations to ROPOD will be made.
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real-time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency with respect to ball motion of 30 ± 8 ms.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is based on off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions (such as ball surface texture, lighting conditions, or ball size) as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
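The integration step from per-frame flow to a virtual pose can be sketched as follows. The axis mapping (vertical flow = forward rolling, horizontal flow = yaw) is a simplifying assumption; in the real system the mapping comes from camera alignment and calibration.

```python
import math

def integrate_ball_motion(flow_samples, px_per_mm, ball_radius_mm):
    """Integrate per-frame optical flow (dx, dy in pixels) measured on the
    ball surface into a 2-D virtual pose (x, y in mm, heading in rad).
    Assumes dy maps to forward rolling and dx to rotation about the yaw axis."""
    x = y = heading = 0.0
    for dx_px, dy_px in flow_samples:
        forward_mm = dy_px / px_per_mm                    # arc length rolled forward
        heading += (dx_px / px_per_mm) / ball_radius_mm   # arc length / radius = yaw angle
        x += forward_mm * math.cos(heading)
        y += forward_mm * math.sin(heading)
    return x, y, heading
```

At 500 Hz, each loop iteration corresponds to one 2 ms flow sample; the resulting pose would drive the virtual reality projection.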
TREE Jahresbericht 2018
(2019)
Analytical pyrolysis
(2019)
Analytical pyrolysis deals with the structural identification and quantitation of pyrolysis products with the ultimate aim of establishing the identity of the original material and the mechanisms of its thermal decomposition. The pyrolytic process is carried out in a pyrolyzer interfaced with analytical instrumentation such as gas chromatography (GC), mass spectrometry (MS), gas chromatography coupled with mass spectrometry (GC/MS), or Fourier-transform infrared spectroscopy (GC/FTIR). By measurement and identification of pyrolysis products, the molecular composition of the original sample can often be reconstructed. This book is the outcome of contributions by experts in the field of pyrolysis and includes applications of analytical pyrolysis-GC/MS to characterize the structure of synthetic organic polymers and lignocellulosic materials as well as cellulosic pulps and isolated lignins, solid wood, waste particle board, and bio-oil. The thermal degradation of cellulose and biomass is examined by scanning electron microscopy, FTIR spectroscopy, thermogravimetry (TG), differential thermal analysis, and TG/MS. The calorimetric determination of the high heating values of different raw biomass, plastic waste, and biomass/plastic waste mixtures and their by-products resulting from pyrolysis is described.
Mass Spectrometry: Pyrolysis
(2019)
Estimating the impact of successful completion of vocational education on employment outcomes
(2019)
Luxusgut Wohnen
(2019)
CSR-Erfolgssteuerung
(2019)
This textbook deals with the CSR reform process, which calls on companies to exercise global due diligence. The CSR reporting obligation, the procurement-law reform, and the call to implement risk management systems affect not only large companies but in particular small and medium-sized enterprises (SMEs). The book therefore aims to make the relevance of CSR transparent for companies of all sizes and to dismantle obstacles and barriers to implementation.
The last two decades have been shaped by the exponential growth of available data. Every day, humans and machines produce more and more data, often stored in distributed data stores. Application areas can be found, for example, in physics and astronomy, where immense volumes of data are generated by particle accelerators or satellites and must be stored and processed. New insights cannot be gained from these data volumes either directly by humans or by traditional analysis methods. Parallel and distributed data analysis techniques are required to process these masses of data. [MTT18, NEKH+18]
Gas Chromatography
(2019)
Gas chromatography (GC) is one of the most important types of chromatography used in analytical chemistry for separating and analyzing chemical organic compounds. Today, gas chromatography is one of the most widespread investigation methods of instrumental analysis. This technique is used in the laboratories of chemical, petrochemical, and pharmaceutical industries, in research institutes, and also in clinical, environmental, and food and beverage analysis. This book is the outcome of contributions by experts in the field of gas chromatography and includes a short history of gas chromatography, an overview of derivatization methods and sample preparation techniques, a comprehensive study on pyrazole mass spectrometric fragmentation, and a GC/MS/MS method for the determination and quantification of pesticide residues in grape samples.
Data-Driven Robot Fault Detection and Diagnosis Using Generative Models: A Modified SFDD Algorithm
(2019)
This paper presents a modification of the data-driven sensor-based fault detection and diagnosis (SFDD) algorithm for online robot monitoring. Our version of the algorithm uses a collection of generative models, in particular restricted Boltzmann machines, each of which represents the distribution of sliding window correlations between a pair of correlated measurements. We use such models in a residual generation scheme, where high residuals generate conflict sets that are then used in a subsequent diagnosis step. As a proof of concept, the framework is evaluated on a mobile logistics robot for the problem of recognising disconnected wheels. The evaluation demonstrates the feasibility of the framework (on the faulty data set, the models obtained 88.6% precision and 75.6% recall rates), but also shows that the monitoring results are influenced by the choice of distribution model and the model parameters as a whole.
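The residual-generation idea, flagging windows whose pairwise correlation deviates from the learned nominal value, can be sketched without the restricted Boltzmann machine; below, a fixed expected correlation stands in for the learned distribution, purely for illustration:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation; 0.0 if either window is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_conflicts(sig_a, sig_b, window, expected, threshold):
    """Return window start indices whose sliding-window correlation deviates
    from the nominal value by more than `threshold` (the residual)."""
    return [i for i in range(len(sig_a) - window + 1)
            if abs(pearson(sig_a[i:i + window],
                           sig_b[i:i + window]) - expected) > threshold]
```

The indices returned here play the role of the conflict sets passed on to the diagnosis step.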
Tell Your Robot What To Do: Evaluation of Natural Language Models for Robot Command Processing
(2019)
The use of natural language to indicate robot tasks is a convenient way to command robots. As a result, several models and approaches capable of understanding robot commands have been developed, which however complicates the choice of a suitable model for a given scenario. In this work, we present a comparative analysis and benchmarking of four natural language understanding models - Mbot, Rasa, LU4R, and ECG. We particularly evaluate the performance of the models to understand domestic service robot commands by recognizing the actions and any complementary information in them in three use cases: the RoboCup@Home General Purpose Service Robot (GPSR) category 1 contest, GPSR category 2, and hospital logistics in the context of the ROPOD project.
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled to detect faults in the sensors, if the defined dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method on simulated signals with the Pearson correlation, and show that the method elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but is also prone to detecting false positives. This suggests that this method can be used as a weak detection of faults, but other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
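This is not a full Granger causality test, but the lag-awareness that distinguishes it from plain Pearson correlation can be illustrated by scanning correlations over candidate lags (a toy sketch with an invented pulse signal):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation; 0.0 if either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def best_lag(x, y, max_lag):
    """Return (lag, corr) maximising |corr(x[t], y[t+lag])| over lags
    0..max_lag; a delayed dependency that is weak at lag 0 shows up
    clearly at its true lag."""
    best = (0, pearson(x, y))
    for lag in range(1, max_lag + 1):
        c = pearson(x[:len(x) - lag], y[lag:])
        if abs(c) > abs(best[1]):
            best = (lag, c)
    return best
```

A Granger-style method goes further by testing whether x's past improves the prediction of y beyond y's own past, but the lag scan above already captures why zero-lag Pearson correlation misses delayed dependencies.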
Trust is the lubricant of the sharing economy, especially in peer-to-peer carsharing, where you leave a valuable good to a stranger in the hope of getting it back unscathed. The central mechanisms for handling this information gap nowadays are ratings and reviews of other users. The rise of connected car technology opens new possibilities to increase trust by collecting and providing, for example, driving behavior data. At the same time, this means an intrusion into the privacy of the user. Therefore, in this work we explore technological approaches that allow building trust without violating the privacy of individuals. We evaluate to what extent blockchain technology and smart contracts are suitable technologies to meet these challenges by setting up a prototype implementation of a blockchain-based carsharing approach. In this context, we present our research approach and evaluate the prototype in terms of trust and privacy.
For robots acting, and failing, in everyday environments, a predictable behaviour representation is important so that it can be utilised for failure analysis, recovery, and subsequent improvement. Learning from demonstration combined with dynamic motion primitives is one commonly used technique for creating models that are easy to analyse and interpret; however, mobile manipulators complicate such models, since they need the ability to synchronise arm and base motions for performing purposeful tasks. In this paper, we analyse dynamic motion primitives in the context of a mobile manipulator, a Toyota Human Support Robot (HSR), and introduce a small extension of dynamic motion primitives that makes it possible to perform whole body motion with a mobile manipulator. We then present an extensive set of experiments in which our robot was grasping various everyday objects in a domestic environment, where a sequence of object detection, pose estimation, and manipulation was required for successfully completing the task. Our experiments demonstrate the feasibility of the proposed whole body motion framework for everyday object manipulation, but also illustrate the necessity for highly adaptive manipulation strategies that make better use of a robot's perceptual capabilities.
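A one-dimensional dynamic motion primitive reduces to a critically damped attractor plus a phase-gated forcing term. A minimal sketch follows; the gains and the zero forcing term are illustrative, and in the whole-body case one such system would run per arm and base degree of freedom, synchronised by a shared canonical phase:

```python
def dmp_rollout(y0, goal, n_steps=200, tau=1.0,
                alpha_y=25.0, beta_y=6.25, alpha_x=4.0,
                forcing=lambda x: 0.0):
    """Roll out a 1-D dynamic motion primitive: a spring-damper pulled toward
    `goal`, modulated by forcing(x) gated by the decaying canonical phase x.
    With zero forcing, the rollout is a plain point attractor."""
    dt = tau / n_steps
    y, dy, x = y0, 0.0, 1.0
    for _ in range(n_steps):
        f = forcing(x) * x * (goal - y0)   # gated, goal-scaled forcing term
        ddy = alpha_y * (beta_y * (goal - y) - dy) + f
        dy += ddy * dt
        y += dy * dt
        x -= alpha_x * x * dt              # canonical system: x' = -alpha_x * x
    return y
```

Learning from demonstration fits `forcing` to reproduce a demonstrated trajectory; because the forcing is gated by the vanishing phase, convergence to the goal is preserved regardless of what was learned.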
PosturePairsDB19
(2019)
The application of Raman and infrared (IR) microspectroscopy yields hyperspectral data containing complementary information concerning the molecular composition of a sample. The classification of hyperspectral data from the individual spectroscopic approaches is already state of the art in several fields of research. However, more complex structured samples and difficult measuring conditions might affect the accuracy of classification results negatively and could make a successful classification of the sample components challenging. This contribution presents a comprehensive comparison of supervised pixel classification of hyperspectral microscopic images, showing that a combined approach of Raman and IR microspectroscopy has a high potential to improve classification rates by a meaningful extension of the feature space. It shows that the complementary information in spatially co-registered hyperspectral images of polymer samples can be accessed using different feature extraction methods and, once fused on the feature level, is in general more accurately classifiable in a pattern recognition task than data derived from the individual spectroscopic approaches.
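Feature-level fusion of spatially co-registered hyperspectral images amounts to concatenating, per pixel, the Raman and IR spectra into one feature vector before classification. Schematically, with cubes as nested lists and spectra as flat lists (a data-layout assumption for illustration):

```python
def fuse_features(raman_cube, ir_cube):
    """Feature-level fusion of co-registered hyperspectral images: for every
    pixel, concatenate its Raman and IR spectra into one feature vector."""
    return [[raman_px + ir_px
             for raman_px, ir_px in zip(raman_row, ir_row)]
            for raman_row, ir_row in zip(raman_cube, ir_cube)]
```

The fused per-pixel vectors would then be fed to any standard supervised classifier, which is what extends the feature space compared with either modality alone.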
In structuring the contribution-adjustment procedure under § 162 Abs. 1 SGB VII, the accident insurance institution may use the calculation elements named there (number, severity, or costs of the insured events) alternatively or in combination with one another, and may also apply an amended provision of its statutes to accidents whose consequences only meet the relevant criteria in the contribution year.
Herein we report an update to ACPYPE, a Python3 tool that now properly converts AMBER to GROMACS topologies for force fields that utilize non-default and non-uniform 1–4 electrostatic and nonbonded scaling factors or negative dihedral force constants. Prior to this work, ACPYPE only converted AMBER topologies that used uniform, default 1–4 scaling factors and positive dihedral force constants. We demonstrate that the updated ACPYPE accurately transfers the GLYCAM06 force field, which employs non-uniform 1–4 scaling factors as well as negative dihedral force constants, from AMBER to GROMACS topology files. Validation was performed using β-d-GlcNAc through gas-phase analysis of dihedral energy curves and probability density functions. The updated ACPYPE retains all of its original functionality, but now allows the simulation of complex glycomolecular systems in GROMACS using AMBER-originated force fields. ACPYPE is available for download at https://github.com/alanwilter/acpype.
This work aims to create a natural language generation (NLG) basis for the further development of systems for automatic examination question generation and automatic summarization at Hochschule Bonn-Rhein-Sieg and Fraunhofer IAIS, respectively. Both tasks are highly relevant today: the first can significantly simplify university teachers' work, and the second can assist in faster retrieval of knowledge from the excessively large amounts of information that people often work with. We focus on the search for an efficient and robust approach to the controlled NLG problem. Therefore, although the initial idea of the project was to use generative adversarial networks (GANs), we switched our attention to more robust and easily controllable autoencoders. Thus, in this work we implement an autoencoder for unsupervised discovery of latent space representations of text and show the ability of the system to generate new sentences based on this latent space. Apart from that, we apply Gaussian mixture techniques in order to obtain meaningful text clusters and thereby try to create a tool that allows us to generate sentences relevant to the semantics of the Gaussian clusters, e.g. positive or negative reviews or examination questions on a certain topic. The developed system is tested on several datasets and compared to the GANs' performance.
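The generation step, sampling a latent code from one Gaussian cluster and decoding it, can be sketched as follows. The nearest-neighbour "decoder" over a small latent bank is a stand-in for the trained autoencoder's decoder network, and the cluster parameters are invented for illustration:

```python
import math
import random

def sample_from_cluster(mean, std, rng=random):
    """Draw a latent vector from a diagonal-covariance Gaussian mixture component."""
    return [rng.gauss(m, s) for m, s in zip(mean, std)]

def decode_nearest(z, latent_bank):
    """Toy decoder: return the sentence whose stored latent code is closest
    to z; a real system would run the autoencoder's decoder network on z."""
    return min(latent_bank, key=lambda item: math.dist(item[0], z))[1]
```

Choosing which mixture component to sample from is what makes the generation controllable: a component fitted to, say, positive reviews yields latent codes that decode to positive-review-like sentences.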
If an accident occurs on the way home from the workplace while the employee concerned is talking on a mobile phone (a so-called "mixed activity"), a commuting accident is ruled out if the occurrence of the accident is predominantly attributable to the phone call, which was thus essential to the accident's occurrence.
The number of studies on work breaks and the importance of this subject are growing rapidly, with research showing that work breaks increase employees' well-being, performance, and workplace safety. However, comparing the results of work break research is difficult, since the study designs and methods are heterogeneous and there is no standard theoretical model for work breaks. Based on a systematic literature search, this scoping review included a total of 93 studies on experimental work break research conducted over the last 30 years. This scoping review provides a first structured evaluation of the underlying theoretical frameworks, the variables investigated, and the measurement methods applied. Studies using a combination of measurement methods from the categories "self-report measures," "performance measures," and "physiological measures" are most common and are to be preferred in work break research. This overview supplies important information for ergonomics researchers, allowing them to design work break studies with a more structured and stronger theory-based approach. A standard theoretical model for work breaks is needed in order to further increase the comparability of studies in the field of experimental work break research in the future.
Computer graphics research strives to synthesize images of a high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Here, rendering efficiency is a central challenge involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. The dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field-of-view, due either to their size or to their head-mounted form. Currently, research prototypes range from stereo and multi-view systems, head-mounted devices with adaptable lenses, up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents us with immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed at higher-level structures in the brain. Knowledge of these processes helps to design novel rendering approaches that allow the creation of images at a higher quality and within a reduced time-frame. This thesis presents the state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality but also to reduce workload alike - a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. By using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capturing of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrated that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, whilst having a significantly reduced computational complexity.
Treatment options for acute myeloid leukemia (AML) remain extremely limited and associated with significant toxicity. Nicotinamide phosphoribosyltransferase (NAMPT) is involved in the generation of NAD+ and a potential therapeutic target in AML. We evaluated the effect of KPT-9274, a p21-activated kinase 4/NAMPT inhibitor that possesses a unique NAMPT-binding profile based on in silico modeling compared with earlier compounds pursued against this target. KPT-9274 elicited loss of mitochondrial respiration and glycolysis and induced apoptosis in AML subtypes independent of mutations and genomic abnormalities. These actions occurred mainly through the depletion of NAD+, whereas genetic knockdown of p21-activated kinase 4 did not induce cytotoxicity in AML cell lines or influence the cytotoxic effect of KPT-9274. KPT-9274 exposure reduced colony formation, increased blast differentiation, and diminished the frequency of leukemia-initiating cells from primary AML samples; KPT-9274 was minimally cytotoxic toward normal hematopoietic or immune cells. In addition, KPT-9274 improved overall survival in vivo in 2 different mouse models of AML and reduced tumor development in a patient-derived xenograft model of AML. Overall, KPT-9274 exhibited broad preclinical activity across a variety of AML subtypes and warrants further investigation as a potential therapeutic agent for AML.
This paper proposes an approach to ANN-based temperature controller design for a plastic injection moulding system. The design approach is applied to the development of a controller based on a combination of a classical ANN and an integrator. The controller provides a fast temperature response and zero steady-state error for three typical heaters (bar, nozzle, and cartridge) of a plastic moulding system. Simulation results in Matlab Simulink, and a comparison with an industrial PID regulator, have shown the advantages of the controller, such as significantly less overshoot and faster transients (compared to a PID with autotuning) for all examined heaters. In order to verify the proposed approach, the designed ANN controller was implemented and tested on an experimental setup based on an STM32 board.
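The role of the integrator, eliminating the steady-state error a purely static controller would leave, can be demonstrated on a toy first-order heater model. The proportional term below stands in for the trained ANN, and all gains and plant constants are invented for illustration:

```python
def simulate_heater(controller, setpoint, n_steps=5000, dt=0.1,
                    gain=2.0, tau=50.0, ambient=20.0):
    """First-order heater model T' = (ambient - T + gain * u) / tau, with
    heating power u = controller(error, dt) clipped to [0, 100]."""
    temp = ambient
    for _ in range(n_steps):
        u = max(0.0, min(100.0, controller(setpoint - temp, dt)))
        temp += (ambient - temp + gain * u) / tau * dt
    return temp

def make_pi_controller(kp=5.0, ki=0.5):
    """Static map (here just proportional; the paper trains an ANN for this
    part) plus an integrator with a simple anti-windup clamp."""
    state = {'i': 0.0}
    def controller(error, dt):
        state['i'] = max(0.0, min(100.0, state['i'] + ki * error * dt))
        return kp * error + state['i']
    return controller
```

With the integrator active the simulated heater settles on the setpoint; with `ki=0.0` the same static controller settles below it, which is the steady-state error the integrator removes.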
This work presents preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems, using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. This holds even more for distributed robotic systems, where the task of fault detection and diagnosis becomes exponentially more difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed, and with distributed robotic systems generating large amounts of data, machine learning becomes an attractive modelling method, especially because of its high performance. However, with current methods such as artificial neural networks (ANNs), the issue of explainability arises: learnt models lack the ability to give explainable reasons for their decisions.
This paper presents current trends in data collection methods for distributed systems, in inductive logic programming (ILP), an explainable machine learning method, and in fault detection and diagnosis.
The research project rests on two elements: the first study, a behavioural experiment with 35 students of Hochschule Bonn-Rhein-Sieg, investigated the influence of group size (bystander effect) and of presented information about diffusion of responsibility (priming) on sustainable behaviour. A second online experiment then surveyed the influence of perceived personal threat on the willingness to behave sustainably (N = 72). The results of the first experiment show a weak, statistically non-significant influence of group size and a partly statistically significant influence of the presented information about diffusion of responsibility on the measured sustainable behaviour. Convenience and monetary cost are by far the largest obstacles to sustainable behaviour, while influence by others and the goal of environmental protection were named as positive arguments for sustainable behaviour. The follow-up study demonstrated a statistically significant causal relationship between the perceived personal threat posed by the current environmental and climate situation and the willingness to behave sustainably. Overall, all results on behavioural intentions showed a high willingness of the participants to behave sustainably.
Neural network based object detectors are able to automate many difficult, tedious tasks. However, they are usually slow and/or require powerful hardware. One main reason is Batch Normalization (BN) [1], an important method for building these detectors. Recent studies present a potential replacement called the Self-normalizing Neural Network (SNN) [2], whose core is a special activation function named the Scaled Exponential Linear Unit (SELU). This replacement seems to retain most of BN's benefits while requiring less computational power. Nonetheless, it is uncertain whether SELU and neural network based detectors are compatible with one another. An evaluation of SELU-incorporated networks would help clarify that uncertainty. Such an evaluation is performed through a series of tests on different neural networks. After the evaluation, it is concluded that, while indeed faster, SELU is still not as good as BN for building complex object detector networks.
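For concreteness, the SELU activation at the core of an SNN can be written down directly. This is a minimal sketch using the standard constants published with the SNN paper [2], not values taken from the evaluation above:

```python
import math

# Standard SELU constants from the original SNN publication; they are fixed,
# not learned, which is what makes the network self-normalizing.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x: float) -> float:
    """Scaled Exponential Linear Unit: scale * (x if x > 0 else alpha*(e^x - 1))."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)
```

Unlike BN, this requires no batch statistics at inference time, which is where the speed advantage reported above comes from.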
The paper presents the topological reduction method applied to gas transport networks, using contraction of series, parallel and tree-like subgraphs. The contraction operations are implemented for pipe elements described by a quadratic friction law. This allows a significant reduction of the graphs and an acceleration of the solution procedure for stationary network problems. The algorithm has been tested on several realistic network examples. Possible extensions of the method to different friction laws and other elements are discussed.
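The two basic contraction rules can be sketched as follows, assuming the common pressure-squared form of the quadratic friction law, p1² − p2² = c·q·|q|; the paper's exact formulation is not given in the abstract:

```python
import math

def series(c1: float, c2: float) -> float:
    # Series contraction: the same flow q passes through both pipes, so the
    # quadratic pressure-squared drops c*q*|q| add, and coefficients sum.
    return c1 + c2

def parallel(c1: float, c2: float) -> float:
    # Parallel contraction: both pipes see the same drop D, so the flows
    # q_i = sqrt(D / c_i) add, giving c_eff = 1 / (sum_i 1/sqrt(c_i))**2.
    return 1.0 / (1.0 / math.sqrt(c1) + 1.0 / math.sqrt(c2)) ** 2
```

Applying these rules repeatedly to series, parallel and tree-like subgraphs shrinks the network before the stationary solve, which is the acceleration the abstract describes.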
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting - in terms of visualization and interaction - for the collaborative analysis of a real-world task, we have to understand the group dynamics during the work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details. Here, the users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space. Therefore, the user has to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative data and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task in order to approach it, in their opinion, in a more effective way, in parallel. We describe the subdivision strategies for both task conditions. We also detected and described multiple user roles, as well as a new coupling style that fits neither category, loosely nor tightly coupled. Moreover, we observed a territory type that has not been mentioned previously in research. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics. We found that users perceived some regions as less comfortable for prolonged work.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11-0.24 and 0.09-0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
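As an illustration of how a CBS reference value is typically obtained, here is a sketch of the widely used two-point X⁻³ extrapolation of correlation energies from two consecutive cardinal numbers; whether the paper uses exactly this scheme is not stated in the abstract:

```python
def cbs_two_point(e_x: float, e_y: float, x: int, y: int) -> float:
    """Two-point inverse-cubic extrapolation to the complete basis set limit.

    Assumes the correlation energy behaves as E(X) = E_CBS + A * X**-3
    for cardinal numbers x > y (e.g. x=4, y=3 for AVQZ/AVTZ).
    """
    return (e_x * x**3 - e_y * y**3) / (x**3 - y**3)
```

With exact X⁻³ behaviour the formula recovers E_CBS exactly; with real data, the residual deviation is what motivates comparing extrapolations from different basis-set triples as done above.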
This book contains the most important statistical instruments and formulas needed in economics and the social sciences. Particular emphasis is placed on enabling the reader to understand and apply every single formula. For each formula, the book therefore provides an explanation of its use cases, a detailed description of the individual symbols in the formula and of the necessary calculation steps, a worked example with a complete and annotated solution path, and an interpretation of the respective result.
Incoming solar radiation is an important driver of our climate and weather. Several studies (see for instance Frank et al. 2018) have revealed discrepancies between ground-based irradiance measurements and the predictions of regional weather models. In the realm of electricity generation, accurate forecasts of solar photovoltaic (PV) energy yield are becoming indispensable for cost-effective grid operation: in Germany there are 1.6 million PV systems installed, with a nominal power of 46 GW (Bundesverband Solarwirtschaft 2019). The proliferation of PV systems provides a unique opportunity to characterise global irradiance with unprecedented spatiotemporal resolution, which in turn will allow for highly resolved PV power forecasts.
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of European and American populations suffer from back pain at some point in their lives. One successful approach to address lower back pain is postural training, which can be supported by wearable devices providing real-time feedback about the user's posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes, which are meaningful to professional posture trainers.
The perception of perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of different gravity-related and body-based cues. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on preceding studies whose results indicate that a gravity level of about 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and to confirm the above hypothesis. It could be shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing under normal gravity in eliciting the sense of perceptual upright. The data obtained further indicate that a gravity field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², and the lunar value of 1.6 m/s² thus lies well above this threshold.
ITS Jahresbericht 2018
(2019)
Opportunities for Sustainable Mobility: Re-thinking Eco-feedback from a Citizen's Perspective
(2019)
In developed nations, a growing emphasis is being placed on the promotion of sustainable behaviours amongst individuals, or ‘citizen-consumers’. In HCI, various eco-feedback tools have been designed as persuasive instruments, with a strong normative appeal geared towards encouraging citizens to adopt more sustainable mobility. However, many critiques have been formulated regarding this ‘paternalistic’ stance. In this paper, we switched the perspective from a designer’s to a citizen’s point of view and explored how people would use eco-feedback tools to support sustainable mobility in their city. In the study, we conducted 14 interviews with citizens who had used eco-feedback previously. The findings indicate new starting points that could inform future eco-feedback tools. These encompass: (1) better information regarding how sustainable mobility is measured and monitored; (2) respect for individual mobility situations and preferences; and (3) scope for participation and the sharing of responsibility between citizens and municipal city services.
When developing robot functionalities, finite state machines are commonly used due to their straightforward semantics and simple implementation. State machines are also a natural implementation choice when designing robot experiments, as they generally lead to reproducible program execution. In practice, however, the implementation of state machines can lead to significant code repetition and may require direct interaction with the code when reparameterisation is needed. In this paper, we present a small Python library that allows state machines to be specified, configured, and dynamically created using a minimal domain-specific language. We illustrate the use of the library in three different use cases - scenario definition in the context of the RoboCup@Home competition, experiment design in the context of the ROPOD project, as well as specification transfer between robots.
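The idea of a declarative state machine specification can be sketched as follows; the spec format, state names, and function names below are illustrative stand-ins, not the library's actual API:

```python
# Hypothetical declarative spec: states and their outcome -> successor map.
# Reparameterising the machine means editing this data, not the code.
SPEC = {
    "start": "approach",
    "states": {
        "approach": {"succeeded": "grasp", "failed": "approach"},
        "grasp":    {"succeeded": "done",  "failed": "approach"},
        "done":     {},
    },
}

def run(spec, outcomes):
    """Step through the machine, consuming one outcome per transition,
    and return the sequence of visited states."""
    state, trace = spec["start"], []
    for outcome in outcomes:
        trace.append(state)
        state = spec["states"][state].get(outcome, state)
    trace.append(state)
    return trace
```

Because the machine is pure data, the same executor can be reused across scenarios, which is the kind of specification transfer between robots the abstract mentions.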
In the field of domestic service robots, recovery from faults is crucial to promote user acceptance. In this context, this work focuses on specific faults which arise from the interaction of a robot with its real-world environment. Even a well-modelled robot may fail to perform its tasks successfully due to external faults which occur because of an infinite number of unforeseeable and unmodelled situations. By investigating the most frequent failures in typical scenarios observed in real-world demonstrations and competitions with the autonomous service robots Care-O-Bot III and youBot, we identified four fault classes caused by disturbances, imperfect perception, inadequate planning operators, or the chaining of action sequences. This thesis then presents two approaches to handle external faults caused by insufficient knowledge about the preconditions of a planning operator. The first approach reasons about detected external faults using knowledge of naive physics. The naive physics knowledge is represented by the physical properties of objects, which are formalized in a logical framework. The proposed approach applies a qualitative version of the physical laws to these properties in order to reason. By interpreting the reasoning results, the robot identifies information about the situations which can cause the fault. Applying this approach to simple manipulation tasks, like picking and placing objects, shows that naive physics holds great possibilities for reasoning about unknown external faults in robotics. The second approach acquires missing knowledge about the execution of an action through learning by experimentation. First, it investigates a representation of execution-specific knowledge that can be learned for one particular situation and reused for situations which deviate from the original. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively.
This representation is called an action execution model (AEM) here. The approach provides a learning strategy which uses a physical simulation to generate the training data for learning both the symbolic and the geometric aspects of the model. The experimental analysis, performed on two physical robots, shows that AEM can reliably describe execution-specific knowledge and can thereby serve as a potential model for avoiding the occurrence of external faults.
Emotion and gender recognition from facial features are important capabilities for human empathy, and robots should have these capabilities as well. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on some specific faults which arise from the interaction of a robot with its real world environment due to insufficient knowledge for action execution.
In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model.
In this work we investigate such symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation together with the most suitable learning paradigm are identified.
Interactive Object Detection
(2019)
The success of state-of-the-art object detection methods depends heavily on the availability of a large amount of annotated image data. The raw image data available from various sources are abundant but unannotated. Annotating image data is often costly, time-consuming or needs expert help. In this work, a new learning paradigm called Active Learning is explored, which uses user interaction to obtain annotations for a subset of the dataset. The goal of active learning is to achieve superior object detection performance with images that are annotated on demand. To realize the active learning method, the trade-off between the effort to annotate unlabeled data (annotation cost) and the performance of the object detection model is minimised.
A Random Forests-based method called Hough Forest is chosen as the object detection model, and the annotation cost is calculated from the predicted false positive and false negative rates. The framework is successfully evaluated on two Computer Vision benchmarks and two Carl Zeiss custom datasets. An evaluation of RGB, HoG and Deep features for the task is also presented.
Experimental results show that using Deep features with Hough Forest achieves the best performance. By employing Active Learning, it is demonstrated that performance comparable to the fully supervised setting can be achieved by annotating just 2.5% of the images. To this end, an annotation tool was developed for user interaction during Active Learning.
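The annotate-on-demand loop underlying such an approach can be sketched generically; the `fit` and `score` callables below are placeholders and do not reproduce the paper's Hough Forest detector or its annotation-cost estimate:

```python
def active_learning(pool, train, fit, score, budget):
    """Pool-based active learning: repeatedly annotate the sample the
    current model is least certain about (here: the lowest-scoring one)."""
    model = fit(train)
    for _ in range(budget):
        if not pool:
            break
        # Query selection: pick the pool item with the lowest confidence.
        query = min(pool, key=lambda x: score(model, x))
        pool.remove(query)
        train.append(query)  # stands in for asking the user to annotate it
        model = fit(train)   # retrain on the enlarged annotated set
    return model, train
```

The budget parameter is what bounds the annotation effort, mirroring the 2.5% annotation rate reported above.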
The design of self-driving cars is one of the most exciting and ambitious challenges of our days, and new research work is published every day. To provide orientation, this article presents an overview of various methods used to study the human side of autonomous driving. Simplifying roughly, one can distinguish between design-science-oriented methods (such as Research through Design, Wizard of Oz or driving simulators) and behavioral science methods (such as surveys, interviews, and observation). We show how these methods are adopted in the context of autonomous driving research and discuss their strengths and weaknesses. Due to the complexity of the topic, we argue that mixed-method approaches are suitable to explore the impact of autonomous driving on different levels: the individual, social interaction, and society.
Background 3-hydroxy-3-methylglutaryl-coenzyme A lyase deficiency (HMGCLD) is an autosomal recessive disorder of ketogenesis and leucine degradation due to mutations in HMGCL.
Method We performed a systematic literature search to identify all published cases. 211 patients for whom relevant clinical data were available were included in this analysis. Clinical course, biochemical findings and mutation data are highlighted and discussed. An overview of all published HMGCL variants is provided.
Results More than 95% of patients presented with acute metabolic decompensation. Most patients manifested within the first year of life, 42.4% already neonatally. Very few individuals remained asymptomatic. The neurologic long-term outcome was favorable with 62.6% of patients showing normal development.
Conclusion This comprehensive data analysis provides a systematic overview on all published cases with HMGCLD including a list of all known HMGCL mutations.
2-methylacetoacetyl-coenzyme A thiolase (beta-ketothiolase) deficiency: one disease - two pathways
(2019)
Background: 2-methylacetoacetyl-coenzyme A thiolase deficiency (MATD; deficiency of mitochondrial acetoacetyl-coenzyme A thiolase T2/ “beta-ketothiolase”) is an autosomal recessive disorder of ketone body utilization and isoleucine degradation due to mutations in ACAT1.
Methods: We performed a systematic literature search for all available clinical descriptions of patients with MATD. 244 patients were identified and included in this analysis. Clinical course and biochemical data are presented and discussed.
Results: For 89.6% of patients at least one acute metabolic decompensation was reported. Age at first symptoms ranged from 2 days to 8 years (median 12 months). More than 82% of patients presented in the first two years of life, while manifestation in the neonatal period was the exception (3.4%). 77.0% of patients (157 of 204) showed normal psychomotor development without neurologic abnormalities.
Conclusion: This comprehensive data analysis provides a systematic overview on all cases with MATD identified in the literature. It demonstrates that MATD is a rather benign disorder with often favourable outcome, when compared with many other organic acidurias.
Förderpreise 2018
(2019)
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data in most cases shows useful results when the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of former training history can be analyzed separately - we call those terms preload - it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads. The prediction error of the preload-extended FF-Model is around 54% of the error of the FF-Model without preloads.
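For reference, a sketch of the classical Banister FF-Model that the paper extends; the preload terms themselves are not reproduced here, and the parameter names are the conventional ones rather than the paper's:

```python
import math

def ff_performance(loads, p0, k1, tau1, k2, tau2):
    """Classical Banister Fitness-Fatigue Model.

    Performance after day n = len(loads) is the basic level p0 plus an
    exponentially decaying fitness term minus an antagonistic fatigue term,
    each accumulated over the daily training loads w_i.
    """
    n = len(loads)
    fitness = sum(w * math.exp(-(n - i) / tau1) for i, w in enumerate(loads, 1))
    fatigue = sum(w * math.exp(-(n - i) / tau2) for i, w in enumerate(loads, 1))
    return p0 + k1 * fitness - k2 * fatigue
```

The preload extension described above amounts to adding terms that carry fitness and fatigue accumulated before day 1, so that p0 no longer has to absorb the athlete's unrecorded training history.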
Atmospheric aerosols affect the power production of solar energy systems. Their impact depends on both the atmospheric conditions and the solar technology employed. As a region with a shortage of power generation and high solar insolation, West Africa shows high potential for the application of solar power systems. However, dust outbreaks, carrying high aerosol loads, occur especially in the Sahel, located between the Saharan desert in the north and the Sudanian Savanna in the south. They can affect the whole region for several days, with significant effects on power generation. This study investigates the impact of atmospheric aerosols on solar energy production for the example year 2006, making use of six well-instrumented sites in West Africa. Two different solar power technologies, a photovoltaic (PV) and a parabolic trough (PT) power plant, are considered. The daily reduction of solar power due to aerosols is determined over mostly clear-sky days in 2006 with a model chain combining radiative transfer and technology-specific power generation. For mostly clear days, the local daily reduction of PV power at alternating current (PVAC) and of PT power (PTP) due to the presence of aerosols lies between 13% and 22% and between 22% and 37%, respectively. In March 2006 a major dust outbreak occurred, which serves as an example to investigate the impact of an aerosol extreme event on solar power. During the dust outbreak, daily reductions of PVAC and PTP of up to 79% and 100% occur, with a mean reduction of 20% to 40% for PVAC and of 32% to 71% for PTP during the 12 days of the event.