H-BRS Bibliography — Fachbereich Informatik, year of publication 2019 (69 entries)

Document Type
- Conference Object (37)
- Article (18)
- Doctoral Thesis (3)
- Preprint (3)
- Report (3)
- Part of a Book (2)
- Book (monograph, edited volume) (1)
- Research Data (1)
- Master's Thesis (1)

Keywords
- Navigation (3)
- Drosophila (2)
- Hyperspectral image (2)
- Raman microscopy (2)
- Ray tracing (2)
- UAV (2)
- Virtual Reality (2)
- aerodynamics (2)
- dynamic vector fields (2)
- flight zone (2)
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
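For scale, the parameter savings of such modules can be illustrated with depthwise separable convolutions; treating the abstract's "special convolutional modules" as depthwise separable is an assumption here, based on common lightweight CNN designs:

```python
# Parameter counts for a standard vs. a depthwise separable convolution.
# Assumption: the "special convolutional modules" are depthwise separable,
# as in lightweight CNNs such as Xception or MobileNet.

def standard_conv_params(k, c_in, c_out):
    """k x k standard convolution: every output channel sees all inputs."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise mix."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)   # 73728
sep = separable_conv_params(k, c_in, c_out)  # 8768
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

For a typical 3x3 layer with 64 input and 128 output channels, the separable variant needs roughly 8x fewer parameters, which is the kind of saving that makes sub-10 ms CPU inference plausible.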
Herein we report an update to ACPYPE, a Python3 tool that now properly converts AMBER to GROMACS topologies for force fields that utilize non-default and non-uniform 1–4 electrostatic and nonbonded scaling factors or negative dihedral force constants. Prior to this work, ACPYPE only converted AMBER topologies that used uniform, default 1–4 scaling factors and positive dihedral force constants. We demonstrate that the updated ACPYPE accurately transfers the GLYCAM06 force field, which employs non-uniform 1–4 scaling factors as well as negative dihedral force constants, from AMBER to GROMACS topology files. Validation was performed using β-d-GlcNAc through gas-phase analysis of dihedral energy curves and probability density functions. The updated ACPYPE retains all of its original functionality, but now allows the simulation of complex glycomolecular systems in GROMACS using AMBER-originated force fields. ACPYPE is available for download at https://github.com/alanwilter/acpype.
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled to detect faults in the sensors, if the defined dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method against the Pearson correlation on simulated signals, and show that the method elegantly handles noise and lags in the signals and detects dependencies well. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but is also prone to detecting false positives. This suggests that this method can be used as a weak detector of faults, but other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
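The core idea — past values of one signal improving the prediction of another beyond its own past — can be sketched as follows (a generic illustration of Granger-style testing, not the authors' implementation):

```python
# Minimal Granger-style dependency score between two signals: x is said to
# "Granger-cause" y if adding lagged values of x to an autoregressive model
# of y reduces the residual error. A score clearly above 1 indicates that
# x's past carries information about y.
import numpy as np

def granger_score(x, y, lag=2):
    """Ratio of restricted to full residual sum of squares; > 1 means x helps."""
    n = len(y)
    Y = y[lag:]
    # Lagged copies of y (restricted model) and of y plus x (full model).
    own = np.column_stack([y[lag - i - 1 : n - i - 1] for i in range(lag)])
    both = np.column_stack([own] + [x[lag - i - 1 : n - i - 1] for i in range(lag)])

    def rss(A):
        A1 = np.column_stack([A, np.ones(len(A))])  # add intercept column
        beta, *_ = np.linalg.lstsq(A1, Y, rcond=None)
        r = Y - A1 @ beta
        return r @ r

    return rss(own) / rss(both)

# Simulated pair: y depends on its own past and on x's past.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_score(x, y))  # clearly above 1: x's past explains y
```

A fault that breaks the physical coupling between two sensors would show up as this score collapsing toward 1 over a sliding window.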
Bond graph software can simulate bond graph models without the user needing to manually derive equations. This offers the power to model larger and more complex systems than in the past. Multibond graphs (those with vector bonds) offer a compact model which further eases handling multibody systems. Although multibond graphs can be simulated successfully, the use of vector bonds can present difficulties. In addition, most qualitative, bond graph–based exploitation relies on the use of scalar bonds. This article discusses the main methods for simulating bond graphs of multibody systems, using a graphical software platform. The transformation between models with vector and scalar bonds is presented. The methods are then compared with respect to both time and accuracy, through simulation of two benchmark models. This article is a tutorial on the existing methods for simulating three-dimensional rigid and holonomic multibody systems using bond graphs and discusses the difficulties encountered. It then proposes and adapts methods for simulating this type of system directly from its bond graph within a software package. The value of this study is in giving practical guidance to modellers, so that they can implement the adapted method in software.
This work addresses the issue of finding an optimal flight zone for a side-by-side tracking and following Unmanned Aerial Vehicle (UAV) adhering to space-restricting factors brought upon by a dynamic Vector Field Extraction (VFE) algorithm. The VFE algorithm demands a relatively perpendicular field of view from the UAV to the tracked vehicle, thereby enforcing the space-restricting factors, which are distance, angle and altitude. The objective of the UAV is to perform side-by-side tracking and following of a lightweight ground vehicle while acquiring high-quality video of tufts attached to the side of the tracked vehicle. The recorded video is supplied to the VFE algorithm, which produces the positions and deformations of the tufts over time as they interact with the surrounding air, resulting in an airflow model of the tracked vehicle. The present limitations of wind tunnel tests and computational fluid dynamics simulation suggest the use of a UAV for real-world evaluation of the aerodynamic properties of the vehicle's exterior. The novelty of the proposed approach lies in defining the specific flight zone restricting factors while adhering to the VFE algorithm; as a result, we formalize a locally-static and globally-dynamic geofence attached to the tracked vehicle and enclosing the UAV.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include off-line and semi on-line allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
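Of the implemented algorithms, the sequential single-item (SSI) auction admits a compact sketch: in each round, every robot bids its marginal travel cost for every unallocated task, and the single lowest bid wins. The positions and the greedy Euclidean cost below are illustrative, not ROPOD data:

```python
# Sketch of sequential single-item (SSI) auctioning. Each round, the globally
# cheapest (robot, task) pair is found; that robot wins the task and its
# position is updated, so later bids reflect the growing schedule.
import math

def ssi_allocate(robots, tasks):
    """robots: {name: (x, y)}, tasks: {name: (x, y)} -> {robot: [task, ...]}."""
    pos = dict(robots)                       # each robot's current position
    schedule = {r: [] for r in robots}
    unallocated = dict(tasks)
    while unallocated:
        # Lowest bid over all robot/task pairs wins this round.
        cost, winner, task = min(
            (math.dist(pos[r], loc), r, t)
            for r in robots
            for t, loc in unallocated.items()
        )
        schedule[winner].append(task)
        pos[winner] = unallocated.pop(task)  # robot "moves" to the task
    return schedule

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = {"t1": (1.0, 0.0), "t2": (9.0, 0.0), "t3": (2.0, 0.0)}
print(ssi_allocate(robots, tasks))  # {'r1': ['t1', 't3'], 'r2': ['t2']}
```

TeSSI and TeSSIduo extend this auction scheme with temporal constraints, building a time-feasible schedule per robot rather than bidding on distance alone.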
The perception of perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of different gravity-related and body-based cues. The aim of the project was to systematically investigate the relationships between visual and gravitational cues. The project built on previous studies whose results indicate that a gravity level of approximately 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and thus to confirm the above hypothesis. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing under normal gravity in eliciting the sense of perceptual upright. The data obtained further indicate that a gravitational field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which contains many low-power devices often powered by a battery. These devices mainly communicate with the manufacturer's back-end and deliver personal data and secrets like passwords.
Various smart home automation devices such as lamps, locks and thermostats are spreading rapidly in private households. A typical communication protocol for this class of devices is Bluetooth Low Energy (BLE). This work presents a structured security analysis for BLE. The described approach categorizes known attack vectors and describes a possible setup for an analysis. In the course of this work, several security-relevant problems were uncovered that allow attackers to take over the devices completely. It turned out that security features provided for in the standard, such as encryption and integrity checks, are often not implemented at all or implemented incorrectly.
Are quality diversity algorithms better at generating stepping stones than objective-based search?
(2019)
The route to the solution of complex design problems often lies through intermediate "stepping stones" which bear little resemblance to the final solution. By greedily following the path of greatest fitness improvement, objective-based search overlooks and discards stepping stones which might be critical to solving the problem. Here, we hypothesize that Quality Diversity (QD) algorithms are a better way to generate stepping stones than objective-based search: by maintaining a large set of solutions which are of high-quality, but phenotypically different, these algorithms collect promising stepping stones while protecting them in their own "ecological niche". To demonstrate the capabilities of QD we revisit the challenge of recreating images produced by user-driven evolution, a classic challenge which spurred work in novelty search and illustrated the limits of objective-based search. We show that QD far outperforms objective-based search in matching user-evolved images. Further, our results suggest some intriguing possibilities for leveraging the diversity of solutions created by QD.
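The niching mechanism that protects stepping stones can be sketched with a minimal MAP-Elites-style archive; the one-dimensional toy problem and descriptor below are illustrative stand-ins, not the image-matching task from the paper:

```python
# Minimal MAP-Elites-style archive: each phenotypic niche keeps its own best
# solution instead of competing globally on fitness, so low-fitness but
# phenotypically distinct "stepping stones" survive and can be mutated later.
import random

def evaluate(x):
    fitness = -abs(x - 0.75)   # toy objective: get close to 0.75
    niche = int(x * 10) % 10   # toy descriptor: which tenth of [0, 1)
    return fitness, niche

def map_elites(iterations=2000, seed=1):
    rng = random.Random(seed)
    archive = {}               # niche -> (fitness, solution)
    for _ in range(iterations):
        if archive:            # mutate a randomly chosen elite...
            _, parent = rng.choice(list(archive.values()))
            x = min(max(parent + rng.gauss(0, 0.1), 0.0), 0.999)
        else:                  # ...or bootstrap with a random solution
            x = rng.random()
        f, niche = evaluate(x)
        if niche not in archive or f > archive[niche][0]:
            archive[niche] = (f, x)  # best per niche, not best overall
    return archive

archive = map_elites()
print(len(archive), "niches filled")  # diverse solutions are all retained
```

Objective-based search would keep only the single fittest individual; here every niche's elite is preserved as a potential stepping stone.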
The initially large number of variants is reduced by applying custom variant annotation and filtering procedures. This requires complex software toolchains to be set up and data sources to be integrated. Furthermore, increasing study sizes require higher efforts to manage datasets in a multi-user and multi-institution environment. When the cause of a disease or phenotype is unknown, it is common practice to expect numerous iterations of re-specification and refinement of filter strategies. Data analysis support during this phase is fundamental, because users with limited computer literacy cannot handle the large volume of data adequately, if at all. Constant feedback and communication are necessary when filter parameters are adjusted or the study grows with additional samples. Consequently, variant filtering and interpretation becomes time-consuming and hinders a dynamic and explorative data analysis by experts.
The initial phase in real-world engineering optimization and design is a process of discovery in which not all requirements can be specified in advance, or are hard to formalize. Quality diversity algorithms, which produce a variety of high-performing solutions, provide a unique chance to support engineers and designers in the search for what is possible and high performing. In this work we begin to answer the question of how a user can interact with quality diversity and turn it into an interactive innovation aid. By modeling a user's selection, it can be determined whether the optimization is drifting away from the user's preferences. The optimization is then constrained by adding a penalty to the objective function. We present an interactive quality diversity algorithm that can take the user's selection into account. The approach is evaluated in a new multimodal optimization benchmark that allows various optimization tasks to be performed. The user selection drift of the approach is compared to a state-of-the-art alternative on both a planning and a neuroevolution control task, thereby showing its limits and possibilities.
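The penalty mechanism described above can be sketched as follows; the preference model (a centroid of user picks) and the weighting are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of constraining an optimization with a user-selection model:
# solutions far from the modeled preference receive a penalty added to the
# (minimized) objective, steering the search back toward the user's region.

def preference_penalty(solution, user_picks, weight=1.0):
    """Penalty grows with squared distance from the centroid of user picks."""
    dim = len(solution)
    centroid = [sum(p[i] for p in user_picks) / len(user_picks)
                for i in range(dim)]
    return weight * sum((s - c) ** 2 for s, c in zip(solution, centroid))

def penalized_objective(objective, solution, user_picks, weight=1.0):
    # Minimization: raw objective plus drift penalty.
    return objective(solution) + preference_penalty(solution, user_picks, weight)

sphere = lambda x: sum(v * v for v in x)  # toy objective, optimum at origin
picks = [(2.0, 2.0), (2.0, 3.0)]          # region the user selected
print(penalized_objective(sphere, (0.0, 0.0), picks, weight=3.0))  # 30.75
print(penalized_objective(sphere, (2.0, 2.5), picks, weight=3.0))  # 10.25
```

With the penalty active, the raw optimum at the origin is no longer the best penalized solution; a point inside the user's preferred region wins instead.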
Surrogate models are used to reduce the burden of expensive-to-evaluate objective functions in optimization. By creating models which map genomes to objective values, these models can estimate the performance of unknown inputs, and so be used in place of expensive objective functions. Evolutionary techniques such as genetic programming or neuroevolution commonly alter the structure of the genome itself. A lack of consistency in the genotype is a fatal blow to data-driven modeling techniques: interpolation between points is impossible without a common input space. However, while the dimensionality of genotypes may differ across individuals, in many domains, such as controllers or classifiers, the dimensionality of the input and output remains constant. In this work we leverage this insight to embed differing neural networks into the same input space. To judge the difference between the behavior of two neural networks, we give them both the same input sequence, and examine the difference in output. This difference, the phenotypic distance, can then be used to situate these networks into a common input space, allowing us to produce surrogate models which can predict the performance of neural networks regardless of topology. In a robotic navigation task, we show that models trained using this phenotypic embedding perform as well as or better than those trained on the weight values of a fixed topology neural network. We establish such phenotypic surrogate models as a promising and flexible approach which enables surrogate modeling even for representations that undergo structural changes.
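The phenotypic distance idea can be sketched as follows; the tiny tanh MLPs are illustrative stand-ins for the evolved controllers, but the key property — different topologies, identical input/output dimensions, compared on the same probe sequence — matches the description above:

```python
# Phenotypic distance between two neural networks: feed both the same probe
# inputs and measure how their outputs differ. Topology may vary freely as
# long as input and output dimensions match.
import numpy as np

def mlp(weights):
    """Return f(x) for a stack of (W, b) layers with tanh activations."""
    def forward(x):
        for W, b in weights:
            x = np.tanh(x @ W + b)
        return x
    return forward

def phenotypic_distance(net_a, net_b, probe):
    """Mean squared output difference over a shared probe input sequence."""
    return float(np.mean((net_a(probe) - net_b(probe)) ** 2))

rng = np.random.default_rng(42)
probe = rng.uniform(-1, 1, size=(64, 3))   # same inputs for every network

def random_net(hidden):                    # hidden topology may differ...
    sizes = [3] + hidden + [2]             # ...but I/O dimensions are fixed
    return mlp([(rng.normal(size=(a, b)), rng.normal(size=b))
                for a, b in zip(sizes, sizes[1:])])

small, big = random_net([4]), random_net([8, 8])
print(phenotypic_distance(small, big, probe))
print(phenotypic_distance(small, small, probe))  # identical nets: 0.0
```

Distances computed this way place every network, whatever its topology, in one common behavior space, which is what makes a single surrogate model over all of them possible.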
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming since the sensor's sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor's integrated heater is stable, regulated and error-free (or error-tolerant). The error-free requirement in particular can only be achieved if the power supply implements failure-avoidance and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
Survival of patients with pediatric acute lymphoblastic leukemia (ALL) after allogeneic hematopoietic stem cell transplantation (allo-SCT) is mainly compromised by leukemia relapse, carrying dismal prognosis. As novel individualized therapeutic approaches are urgently needed, we performed whole-exome sequencing of leukemic blasts of 10 children with post–allo-SCT relapses with the aim of thoroughly characterizing the mutational landscape and identifying druggable mutations. We found that post–allo-SCT ALL relapses display highly diverse and mostly patient-individual genetic lesions. Moreover, mutational cluster analysis showed substantial clonal dynamics during leukemia progression from initial diagnosis to relapse after allo-SCT. Only very few alterations stayed constant over time. This dynamic clonality was exemplified by the detection of thiopurine resistance-mediating mutations in the nucleotidase NT5C2 in 3 patients' first relapses, which disappeared in the post–allo-SCT relapses on relief of the selective pressure of maintenance chemotherapy. Moreover, we identified TP53 mutations in 4 of 10 patients after allo-SCT, reflecting acquired chemoresistance associated with selective pressure of prior antineoplastic treatment. Finally, in 9 of 10 children's post–allo-SCT relapses, we found alterations in genes for which targeted therapies with novel agents are readily available. We could show efficient targeting of leukemic blasts by APR-246 in 2 patients carrying TP53 mutations. Our findings shed light on the genetic basis of post–allo-SCT relapse and may pave the way for unraveling novel therapeutic strategies in this challenging situation.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11–0.24 and 0.09–0.16 kcal/mol. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
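The abstract does not reproduce the ranking equation itself; purely as a hypothetical illustration of such a user-weighted trade-off, a score combining accuracy error and relative cost could look like this (the function form and all numbers are invented for the sketch, not taken from the paper):

```python
# Hypothetical accuracy/cost ranking sketch: lower score is better, and a
# user-chosen weight w in [0, 1] shifts the balance between accuracy
# (error vs. a CBS reference) and relative calculation cost.

def rank_score(error_kcal, rel_cost, w=0.5):
    """Weighted trade-off; w=1 cares only about accuracy, w=0 only about cost."""
    return w * error_kcal + (1 - w) * rel_cost

basis_sets = {              # invented (error in kcal/mol, relative cost) pairs
    "pcseg-1": (0.8, 1.0),
    "aug-pcseg-2": (0.15, 6.0),
    "def2-QZVPPD": (0.05, 40.0),
}
for w in (0.9, 0.5):        # accuracy-focused user vs. balanced user
    best = min(basis_sets, key=lambda b: rank_score(*basis_sets[b], w=w))
    print(w, best)
```

The point of such a ranking is that "best basis set" is not absolute: shifting the weight from accuracy toward cost changes which set wins.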
We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues like texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels, and can perceive object contour characteristics. Demonstrated also through an art application, the results show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool and surface properties, and for user guidance.