H-BRS Bibliography - Fachbereich Informatik, year of publication: 2019 (69 entries)
Document types: Conference Object (37), Article (18), Doctoral Thesis (3), Preprint (3), Report (3), Part of a Book (2), Book (monograph, edited volume) (1), Research Data (1), Master's Thesis (1)
Energy Profiles of the Ring Puckering of Cyclopentane, Methylcyclopentane and Ethylcyclopentane
(2019)
Emotion and gender recognition from facial features are important properties of human empathy. Robots should also have these capabilities. For this purpose we have designed special convolutional modules that allow a model to recognize emotions and gender with a considerably lower number of parameters, enabling real-time evaluation on a constrained platform. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset, while requiring a computation time of less than 0.008 seconds on a Core i7 CPU. All our code, demos and pre-trained architectures have been released under an open-source license in our repository at https://github.com/oarriaga/face_classification.
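The parameter savings described above come largely from replacing standard convolutions with depthwise-separable ones, as used in the authors' convolutional modules. A minimal sketch of the parameter-count argument (the channel sizes below are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (bias omitted)."""
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    """Depthwise-separable variant: one k x k filter per input channel,
    followed by a 1x1 pointwise convolution to mix channels."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 32, 64)       # 3*3*32*64 = 18432
separable = sep_conv_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336
print(standard, separable, round(standard / separable, 1))
```

For a 3x3 layer from 32 to 64 channels, the separable form uses roughly an eighth of the weights, which is what makes real-time evaluation on a CPU feasible.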
The perception of the perceptual upright (PU) varies between contexts, depending on the weighting of various gravity-related and body-based cues, and due to individual differences. The aim of the project was to systematically investigate the relationships between visual and gravity-related cues. The project built on previous studies whose results indicate that a gravity of approximately 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically taken into account in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and to confirm the hypothesis stated above. It was shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is just as effective as standing under normal gravity in eliciting the sense of the perceptual upright. The data obtained further indicate that a gravity field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², so the lunar value of 1.6 m/s² lies well above this threshold.
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which contains many low-power devices often powered by a battery. These devices mainly communicate with the manufacturer's back-end and deliver personal data and secrets such as passwords.
Computer graphics research strives to synthesize images of high visual realism that are indistinguishable from real visual experiences. While modern image synthesis approaches make it possible to create digital images of astonishing complexity and beauty, processing resources remain a limiting factor. Rendering efficiency is a central challenge here, involving a trade-off between visual fidelity and interactivity. For that reason, there is still a fundamental difference between the perception of the physical world and computer-generated imagery. At the same time, advances in display technologies drive the development of novel display devices. Dynamic range, pixel densities, and refresh rates are constantly increasing. Display systems address a larger visual field by covering a wider field of view, either due to their size or in the form of head-mounted devices. Current research prototypes range from stereo and multi-view systems and head-mounted devices with adaptable lenses up to retinal projection and lightfield/holographic displays. Computer graphics has to keep pace, as driving these devices presents immense challenges, most of which are currently unsolved. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. Visual input passes through the eye's optics, is filtered, and is processed by higher-level structures in the brain. Knowledge of these processes helps in designing novel rendering approaches that allow images to be created at higher quality and within a reduced time frame. This thesis presents state-of-the-art research and models that exploit the limitations of perception in order to increase visual quality while also reducing workload - a concept we call perception-driven rendering.
This research results in several practical rendering approaches that allow some of the fundamental challenges of computer graphics to be tackled. Using different tracking hardware, display systems, and head-mounted devices, we show the potential of each of the presented systems. The capture of specific processes of the human visual system can be improved by combining multiple measurements using machine learning techniques. Different sampling, filtering, and reconstruction techniques aid the visual quality of the synthesized images. An in-depth evaluation of the presented systems, including benchmarks, comparative examination with image metrics, as well as user studies and experiments, demonstrates that the methods introduced are visually superior to, or on the same qualitative level as, ground truth, while having significantly reduced computational complexity.
Modern Monte Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hit points along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path traced results, but with a greatly reduced computational complexity allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11-0.24 and 0.09-0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
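For context, the CBS limit in studies of this kind is commonly estimated by a two-point inverse-cubic extrapolation of the correlation energy (a Helgaker-type scheme; the paper's new nomenclature pins down exactly which variant was used, which this generic sketch does not capture):

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point X^-3 extrapolation of correlation energies obtained with
    basis sets of cardinal numbers x < y (e.g. TZ: x=3, QZ: y=4):
    E_CBS = (y^3 * E_y - x^3 * E_x) / (y^3 - x^3)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Illustrative (made-up) correlation energies in hartree at TZ and QZ
e_cbs = cbs_two_point(-0.250, -0.260, 3, 4)
print(e_cbs)
```

The extrapolated value lies slightly below the larger-basis energy, reflecting the slow X^-3 convergence of the correlation energy with the cardinal number.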
Currently, a variety of methods exist for creating different types of spatio-temporal world models. Despite the numerous methods for this type of modeling, there exists no methodology for comparing the different approaches or their suitability for a given application, e.g. logistics robots. In order to establish a means for comparing and selecting the best-fitting spatio-temporal world modeling technique, a methodology and a standard set of criteria must be established. To that end, state-of-the-art methods for this type of modeling will be collected, listed, and described. Existing methods used for evaluation will also be collected where possible.
Using the collected methods, new criteria and techniques will be devised to enable the comparison of various methods in a qualitative manner. Experiments will be proposed to further narrow and ultimately select a spatio-temporal model for a given purpose. An example network of autonomous logistic robots, ROPOD, will serve as a case study used to demonstrate the use of the new criteria. This will also serve to guide the design of future experiments that aim to select a spatio-temporal world modeling technique for a given task. ROPOD was specifically selected as it operates in a real-world, human-shared environment. This type of environment is desirable for experiments as it provides a unique combination of common and novel problems that arise when selecting an appropriate spatio-temporal world model. Using the developed criteria, a qualitative analysis will be applied to the selected methods to remove unfit options.
Then, experiments will be run on the remaining methods to provide comparative benchmarks. Finally, the results will be analyzed and recommendations to ROPOD will be made.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but only a few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include off-line and semi on-line allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments conclude that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
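The auction-based allocation that MURDOCH-style algorithms perform can be sketched as a greedy sequence of single-task auctions. This is a toy illustration, not the project's implementation; the robots, tasks, and bid function are invented for the example:

```python
def auction_allocate(tasks, robots, cost):
    """Auction each task in sequence; the robot with the lowest bid wins.
    cost(robot, task, load) may penalize robots that already hold tasks."""
    schedule = {r: [] for r in robots}
    for task in tasks:
        winner = min(robots, key=lambda r: cost(r, task, schedule[r]))
        schedule[winner].append(task)
    return schedule

# Illustrative setup: robots at 1-D positions, bids = distance + backlog penalty
robots = {"r1": 0.0, "r2": 10.0}
tasks = [2.0, 9.0, 1.0]
bid = lambda r, t, load: abs(robots[r] - t) + 5 * len(load)
print(auction_allocate(list(tasks), list(robots), bid))  # {'r1': [2.0, 1.0], 'r2': [9.0]}
```

Temporal-constraint variants such as TeSSI replace the scalar bid with a bid over a task schedule that must remain temporally consistent, which is what allows them to build full schedules rather than instantaneous assignments.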
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting - in terms of visualization and interaction - for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details. Here, the users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space. Therefore, the user has to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative and quantitative data. During data analysis, we focused specifically on participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task so that they could work on it in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also detected and described multiple user roles, as well as a new coupling style that fits neither the loosely nor the tightly coupled category. Moreover, we observed a territory type that has not been mentioned previously in research. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for long-time work.
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, parameters are adapted to the subject's individual physical response to training load. Although simulation of the recorded training data in most cases shows useful results once the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value as basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of former training history can be analyzed separately - we call these terms preload - it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads. The prediction error of the preload-extended FF-Model is around 54% of the error of the FF-Model without preloads.
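The classical (Banister-type) FF-Model that this work extends computes performance as a baseline plus two exponentially decaying convolutions of past training load. A minimal sketch, with illustrative parameter values (the preload terms of the extended model are not shown):

```python
import math

def ff_model(loads, p0, k1, k2, tau1, tau2):
    """Banister fitness-fatigue model: performance on day t is the basic
    level p0 plus a fitness term minus a fatigue term, each a decaying
    sum over the training loads w_s of all previous days s < t."""
    perf = []
    for t in range(len(loads) + 1):
        fitness = sum(loads[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(loads[s] * math.exp(-(t - s) / tau2) for s in range(t))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# A single training impulse: fatigue dominates at first (tau2 < tau1,
# k2 > k1), then decays faster, leaving a net fitness gain
curve = ff_model([100.0] + [0.0] * 40, p0=500.0, k1=1.0, k2=2.0,
                 tau1=45.0, tau2=15.0)
```

The crossover from impaired to improved performance after a single load is the model's core behavior; the paper's preload extension addresses the fact that, without terms for prior training history, the fitted p0 is biased upward.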
This work presents preliminary research towards developing an adaptive tool for fault detection and diagnosis of distributed robotic systems, using explainable machine learning methods. Autonomous robots are complex systems that require high reliability in order to operate in different environments. This is even more true for distributed robotic systems, where the task of fault detection and diagnosis becomes exponentially more difficult.
To diagnose systems, models representing the behaviour under investigation need to be developed. With distributed robotic systems generating large amounts of data, machine learning becomes an attractive modelling method, especially because of its high performance. However, with current methods such as artificial neural networks (ANNs), the issue of explainability arises: learnt models lack the ability to give explainable reasons for their decisions.
This paper presents current trends in methods for data collection from distributed systems, in inductive logic programming (ILP), an explainable machine learning method, and in fault detection and diagnosis.
In the field of service robots, dealing with faults is crucial to promote user acceptance. In this context, this work focuses on some specific faults which arise from the interaction of a robot with its real world environment due to insufficient knowledge for action execution.
In our previous work [1], we have shown that such missing knowledge can be obtained through learning by experimentation. The combination of symbolic and geometric models allows us to represent action execution knowledge effectively. However, we did not propose a suitable representation of the symbolic model.
In this work we investigate such symbolic representation and evaluate its learning capability. The experimental analysis is performed on four use cases using four different learning paradigms. As a result, the symbolic representation together with the most suitable learning paradigm are identified.
In Sensor-based Fault Detection and Diagnosis (SFDD) methods, spatial and temporal dependencies among the sensor signals can be modeled to detect faults in the sensors, if the defined dependencies change over time. In this work, we model Granger causal relationships between pairs of sensor data streams to detect changes in their dependencies. We compare the method on simulated signals with the Pearson correlation, and show that the method elegantly handles noise and lags in the signals and provides appreciable dependency detection. We further evaluate the method using sensor data from a mobile robot by injecting both internal and external faults during operation of the robot. The results show that the method is able to detect changes in the system when faults are injected, but is also prone to detecting false positives. This suggests that this method can be used as a weak detection of faults, but other methods, such as the use of a structural model, are required to reliably detect and diagnose faults.
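The Granger-causality test between a pair of sensor streams amounts to comparing how well one signal is predicted from its own lags alone versus its own lags plus lags of the other signal. A self-contained sketch of that comparison (a least-squares version without the usual F-statistic; signal names and parameters are illustrative, not from the paper):

```python
import numpy as np

def lagmat(s, lags):
    """Matrix whose columns are s[t-1], ..., s[t-lags] for t = lags..n-1."""
    return np.column_stack([s[lags - k: len(s) - k] for k in range(1, lags + 1)])

def granger_ssr(x, y, lags=2):
    """Residual sums of squares for predicting y from its own lags
    (restricted) vs. its own lags plus lags of x (unrestricted).
    A much larger restricted SSR suggests x Granger-causes y."""
    Y = y[lags:]
    ones = np.ones((len(Y), 1))
    restricted = np.hstack([ones, lagmat(y, lags)])
    unrestricted = np.hstack([restricted, lagmat(x, lags)])
    ssr = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    return ssr(restricted), ssr(unrestricted)

# Demo: y is driven by lagged x plus its own history
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1]
ssr_restricted, ssr_unrestricted = granger_ssr(x, y)
print(ssr_restricted, ssr_unrestricted)  # restricted SSR is far larger
```

In the fault-detection setting described above, a dependency that holds during nominal operation and disappears (or a previously absent one that emerges) signals a change in the system.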
Quantifying Interference in WiLD Networks using Topography Data and Realistic Antenna Patterns
(2019)
Avoiding possible interference is a key aspect of maximizing the performance of Wi-Fi-based Long Distance networks. In this paper we quantify self-induced interference based on data derived from our testbed and match the findings against simulations. By enhancing current simulation models with two key elements, we significantly reduce the deviation between testbed and simulation: the usage of detailed antenna patterns instead of the cone model, and propagation modeling enhanced by license-free topography data. Based on the gathered data we discuss several possible optimization approaches, such as physically separating local radios, tuning the sensitivity of the transmitter, and using centralized rather than distributed channel assignment algorithms. While our testbed is based on 5 GHz Wi-Fi, we briefly discuss the possible impact of our results on other frequency bands.
Survival of patients with pediatric acute lymphoblastic leukemia (ALL) after allogeneic hematopoietic stem cell transplantation (allo-SCT) is mainly compromised by leukemia relapse, carrying dismal prognosis. As novel individualized therapeutic approaches are urgently needed, we performed whole-exome sequencing of leukemic blasts of 10 children with post–allo-SCT relapses with the aim of thoroughly characterizing the mutational landscape and identifying druggable mutations. We found that post–allo-SCT ALL relapses display highly diverse and mostly patient-individual genetic lesions. Moreover, mutational cluster analysis showed substantial clonal dynamics during leukemia progression from initial diagnosis to relapse after allo-SCT. Only very few alterations stayed constant over time. This dynamic clonality was exemplified by the detection of thiopurine resistance-mediating mutations in the nucleotidase NT5C2 in 3 patients’ first relapses, which disappeared in the post–allo-SCT relapses on relief of selective pressure of maintenance chemotherapy. Moreover, we identified TP53 mutations in 4 of 10 patients after allo-SCT, reflecting acquired chemoresistance associated with selective pressure of prior antineoplastic treatment. Finally, in 9 of 10 children’s post–allo-SCT relapses, we found alterations in genes for which targeted therapies with novel agents are readily available. We could show efficient targeting of leukemic blasts by APR-246 in 2 patients carrying TP53 mutations. Our findings shed light on the genetic basis of post–allo-SCT relapse and may pave the way for unraveling novel therapeutic strategies in this challenging situation.
Application developers constitute an important part of a digital platform’s ecosystem. Knowledge about psychological processes that drive developer behavior in platform ecosystems is scarce. We build on the lead userness construct, which comprises two dimensions, trend leadership and high expected benefits from a solution, to explain how developers’ innovative work behavior (IWB) is stimulated. We employ an efficiency-oriented and a social-political perspective to investigate the relationship between lead userness and IWB. The efficiency-oriented view resonates well with the expected benefit dimension of lead userness, while the social-political view might be interpreted as a reflection of trend leadership. Using structural equation modeling, we test our model with a sample of over 400 developers from three platform ecosystems. We find that lead userness is indirectly associated with IWB and the performance-enhancing view to be the stronger predictor of IWB. Finally, we unravel differences between paid and unpaid app developers in platform ecosystems.
Data-Driven Robot Fault Detection and Diagnosis Using Generative Models: A Modified SFDD Algorithm
(2019)
This paper presents a modification of the data-driven sensor-based fault detection and diagnosis (SFDD) algorithm for online robot monitoring. Our version of the algorithm uses a collection of generative models, in particular restricted Boltzmann machines, each of which represents the distribution of sliding-window correlations between a pair of correlated measurements. We use these models in a residual generation scheme, where high residuals generate conflict sets that are then used in a subsequent diagnosis step. As a proof of concept, the framework is evaluated on a mobile logistics robot for the problem of recognising disconnected wheels. The evaluation demonstrates the feasibility of the framework (on the faulty data set, the models obtained 88.6% precision and 75.6% recall), but also shows that the monitoring results are influenced by the choice of distribution model and the model parameters as a whole.
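A toy version of the residual idea: when the sliding-window correlation between a commanded and a measured signal drifts away from its nominal value, the window is flagged as a conflict. The paper models the correlation distributions with restricted Boltzmann machines rather than the fixed threshold used here, and the signals, window size, and threshold below are all invented for illustration:

```python
import numpy as np

def sliding_corr(a, b, win):
    """Pearson correlation of each length-`win` window of two signals."""
    return np.array([np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
                     for i in range(len(a) - win + 1)])

def residual_flags(corrs, nominal, thresh):
    """Flag windows whose correlation deviates too far from nominal."""
    return np.abs(corrs - nominal) > thresh

rng = np.random.default_rng(1)
t = np.arange(400)
wheel_cmd = np.sin(0.1 * t)                          # commanded velocity
wheel_enc = wheel_cmd + 0.05 * rng.standard_normal(400)  # encoder reading
wheel_enc[200:] = 0.05 * rng.standard_normal(200)    # wheel "disconnects"
corrs = sliding_corr(wheel_cmd, wheel_enc, 50)
flags = residual_flags(corrs, nominal=1.0, thresh=0.5)
```

Windows drawn entirely from the healthy segment stay near a correlation of 1 and are not flagged; once the encoder no longer follows the command, the correlation collapses and the corresponding windows enter the conflict set.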