H-BRS Bibliography
Bond graph modelling was devised by Professor Paynter at the Massachusetts Institute of Technology in 1959 and subsequently developed into a methodology for modelling multidisciplinary systems at a time when nobody was speaking of object-oriented modelling. Object-oriented modelling, on the other hand, has become increasingly popular during the last few years. By relating the characteristics of both approaches, it is shown that bond graph modelling, although much older, may be viewed as a special form of object-oriented modelling. For that purpose the new object-oriented modelling language Modelica, which aims at supporting multiple formalisms, is used as a working language. Although it turns out that bond graph models can be described rather easily, it is obvious that Modelica started from generalized networks and was not designed to support bond graphs. The description of bond graph models in Modelica is illustrated by means of a hydraulic drive. Since VHDL-AMS, an important language standardized and supported by the IEEE, has been extended to also support modelling of non-electrical systems, it is briefly investigated whether it can be used for the description of bond graphs. It turns out that it currently does not seem suitable.
Multidisciplinary systems are described most suitably by bond graphs. In order to determine unnormalized frequency domain sensitivities in symbolic form, this paper proposes to construct, in a systematic manner, a bond graph from another bond graph, called here the associated incremental bond graph. Contrary to other approaches reported in the literature, the variables at the bonds of the incremental bond graph are not sensitivities but variations (incremental changes) in the power variables from their nominal values due to parameter changes; thus their product is power. For linear elements the corresponding model in the incremental bond graph also has a linear characteristic. By deriving the system equations in symbolic state space form from the incremental bond graph in the same way as they are derived from the initial bond graph, the sensitivity matrix of the system can be set up in symbolic form. Its entries are transfer functions depending on the nominal parameter values and on the nominal states and inputs of the original model. The sensitivities can be determined automatically by the bond graph preprocessor CAMP-G together with the widely used program MATLAB and its Symbolic Toolbox for symbolic mathematical calculation; no special-purpose program is needed for the proposed approach. The initial bond graph model may be non-linear and may contain controlled sources and multiport elements. In that case the sensitivity model is linear time-variant and must be solved in the time domain. The rationale and the generality of the proposed approach are presented. For illustration purposes a mechatronic example system, a load positioned by a constant-excitation d.c. motor, is presented and sensitivities are determined in symbolic form by means of CAMP-G/MATLAB.
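The flavour of such a symbolic sensitivity computation can be shown with a small stand-in: a first-order RC-style transfer function differentiated with respect to one parameter. This toy example is our own illustration (using SymPy rather than CAMP-G/MATLAB) and is not taken from the paper.

```python
import sympy as sp

# Illustrative only: symbolic parameter sensitivity of a transfer function,
# in the spirit of the incremental-bond-graph approach (the system is made up).
s, R, C = sp.symbols('s R C', positive=True)

# First-order low-pass transfer function H(s) = 1 / (R*C*s + 1)
H = 1 / (R * C * s + 1)

# Unnormalised sensitivity of H with respect to the parameter R:
S_R = sp.simplify(sp.diff(H, R))

# The sensitivity is itself a transfer function in the nominal parameters.
print(S_R)
```

As the abstract notes for the general case, the resulting sensitivity is again a transfer function in the nominal parameter values.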
The development of mobile robotic systems is a demanding task regarding its complexity, required resources and skills in multiple fields such as software development, artificial intelligence, mechanical design, electrical engineering, signal processing, sensor technology or control theory. This holds true particularly for soccer playing robots, where additional aspects like high dynamics, cooperation and high physical stress have to be dealt with. In robot competitions such as RoboCup, additional skills in the domains of team, project and knowledge management are of importance.
This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of the inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities, (B) the camera and vision system with a focus on real-time feature extraction, e.g. for tracking of the UAV, and (C) the architecture and hardware of the UAV.
The research of autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that is fundamental to enable the desired adaptation and to account for the challenges that the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start the acquisition of knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises, and to “forget” about old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation.
A brief introduction to the context of such application is provided, as well as a discussion on related issues like the relationship of the surprise mechanism with other components of the robot conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.
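The information-theoretic core of such a surprise measure can be illustrated with a minimal sketch. The Shannon surprisal and the geometric habituation factor below are our own simplified assumptions, not the thesis's exact model.

```python
import math

# Minimal illustration: surprise as Shannon surprisal -log2 p(e) of an
# observed event under the agent's current belief distribution.
def surprisal(p_event: float) -> float:
    """Surprise in bits; rare events (small p) yield large surprise."""
    return -math.log2(p_event)

# Hypothetical short-term-memory habituation: repeated surprises fade
# geometrically (the decay factor is an assumption for illustration).
def habituated_surprise(p_event: float, times_seen: int, decay: float = 0.5) -> float:
    return surprisal(p_event) * (decay ** times_seen)

# An event believed to occur with probability 1/8 carries 3 bits of surprise;
# after being observed twice, habituation quarters it.
print(surprisal(0.125))
print(habituated_surprise(0.125, times_seen=2))
```

The response grows with the unexpectedness of the event, matching the proportionality property the abstract describes.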
This thesis introduces and demonstrates a novel method for learning qualitative models of the world by an autonomous robot. The method makes possible the generation of qualitative models that can be used for prediction as well as for directing the experiments to improve the model. The qualitative models form the knowledge representation of the robot and consist of qualitative trees and a non-deterministic finite automaton. An efficient exploration algorithm that lets the robot collect the most relevant learning samples is also introduced. To demonstrate the use of the methodology, representation and algorithm, two experiments are described. The first experiment is conducted using a mobile robot and a ball, where the robot observes the ball and learns the effect of its actions on the observed attributes of the world. The second experiment is conducted using a mobile robot and five boxes, two non-movable and three movable. The robot experiments actively with the objects and observes the changes in the attributes of the world. The main difference between the two experiments is that the first tries to learn by observation while the second tries to learn by experimentation. In both experiments the robot learns qualitative models from its actions and observations. Although the primary objective of the robot is to improve itself by being able to predict the outcome of its actions, the learned models were also used at each step of the learning process to direct the experiments so that the model converges to the final model as quickly as possible.
The automated annotation of data from high-throughput sequencing and genomics experiments is a significant challenge for bioinformatics. Most current approaches rely on sequential pipelines of gene finding and gene function prediction methods that annotate a gene with information from different reference data sources. Each function prediction method contributes evidence supporting a functional assignment. Such approaches generally ignore the links between the information in the reference datasets. These links, however, are valuable for assessing the plausibility of a function assignment and can be used to evaluate the confidence in a prediction. We are working towards a novel annotation system that uses the network of information supporting the function assignment to enrich the annotation process for use by expert curators and to predict the function of previously unannotated genes. In this paper we describe our success in the first stages of this development. We present the data integration steps that are needed to create the core database of integrated reference databases (UniProt, PFAM, PDB, GO and the pathway database Ara-Cyc), which has been established in the ONDEX data integration system. We also present a comparison between different methods for the integration of GO terms as part of the function assignment pipeline and discuss the consequences of this analysis for improving the accuracy of gene function annotation. The methods and algorithms presented in this publication are an integral part of the ONDEX system, which is freely available from http://ondex.sf.net/.
XPERSIF: a software integration framework & architecture for robotic learning by experimentation
(2008)
The integration of independently-developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independently of the integration process. The integration of individual applications into a fully-functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely-coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts.
One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, thus allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.
We have designed an inexpensive intelligent pedestrian counting system. It consists of several counters that can be connected together in a distributed fashion and communicate over a wireless channel. The motion pattern is recorded using a set of passive infrared (PIR) sensors. Each counter has one wireless sensor node that processes the PIR sensor data and transmits it to a base station. An echo state network, a special kind of recurrent neural network, is then used to predict the pedestrian count from the input pattern. The evaluation of the performance of such networks in a novel kind of application is one focus of this work. The counter achieved an accuracy of 80.4%, which is better than commercially available low-priced pedestrian counters. The article reports the experiments we conducted to analyze the counter performance and lists the strengths and limitations of the current implementation. It also reports preliminary test results obtained by substituting the PIR sensors with low-cost active IR distance sensors, which can further improve the counter performance.
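The defining idea of an echo state network (a large fixed random recurrent reservoir with only a trained linear readout) can be sketched as follows. The reservoir size, scaling and dummy data are illustrative assumptions, not the paper's actual PIR features or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 50                      # e.g. 4 sensor channels, 50 reservoir units

# Fixed random input and reservoir weights; only the readout is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 ("echo state property")

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train the linear readout by ordinary least squares on dummy data.
U = rng.uniform(0.0, 1.0, (200, n_in))   # stand-in sensor sequence
y = U.sum(axis=1)                        # stand-in target signal ("count")
X = run_reservoir(U)
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ W_out                         # readout prediction per time step
```

Because only the readout is fitted, training reduces to one least-squares solve, which is what makes ESNs attractive on resource-constrained counter hardware.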
Autonomous mobile robots need internal environment representations or models of their environment in order to act in a goal-directed manner, plan actions and navigate effectively. Especially in situations where a robot cannot be provided with a manually constructed model, or in environments that change over time, the robot needs the ability to autonomously construct and maintain models on its own. To construct a model of an environment, multiple sensor readings have to be acquired and integrated into a single representation. Where the robot has to take these sensor readings is determined by an exploration strategy. The strategy allows the robot to sense all environmental structures and to construct a complete model of its workspace. Given a complete environment model, the task of inspection is to guide the robot to all modeled environmental structures in order to detect changes and to update the model if necessary. Informally stated, exploration and inspection provide the means for acquiring as much information as possible by the robot itself. Both exploration and inspection are highly integrated problems. In addition to the respective strategies, they require several capabilities of a robotic system and comprise various problems from the field of mobile robotics, including Simultaneous Localization and Mapping (SLAM), motion planning and control, as well as reliable collision avoidance. The goal of this thesis is to develop and implement a complete system and a set of algorithms for robotic exploration and inspection. That is, instead of focussing on specific strategies, robotic exploration and inspection are addressed as the integrated problems that they are. Given the set of algorithms, a real mobile service robot has to be able to autonomously explore its workspace, construct a model of it and use this model in subsequent tasks, e.g. for navigating in the workspace or inspecting the workspace itself.
The algorithms need to be reliable, robust against environment dynamics and internal failures, and applicable online in real time on a real mobile robot. The resulting system should allow a mobile service robot to navigate effectively and reliably in a domestic environment and avoid all kinds of collisions. In the context of mobile robotics, domestic environments combine the characteristics of being cluttered, dynamic and populated by humans and domestic animals. SLAM is addressed in terms of incremental range image registration, which provides efficient means to construct internal environment representations online while moving through the environment. Two registration algorithms are presented that can be applied to two-dimensional and three-dimensional data, together with several extensions and an incremental registration procedure. The algorithms are used to construct two different types of environment representations, memory-efficient sparse points and probabilistic reflection maps. For effective navigation in the robot’s workspace, different path planning algorithms are presented for the two types of environment representations. Furthermore, two motion controllers are described that allow a mobile robot to follow planned paths and to approach a target position and orientation. Finally, this thesis presents different exploration and inspection strategies that use the aforementioned algorithms to move the robot to previously unexplored or uninspected terrain and update the internal environment representations accordingly. These strategies are augmented with algorithms for detecting changes in the environment and for segmenting internal models into individual rooms. The resulting system performed very successfully in the 2008 and 2009 RoboCup@Home competitions.
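The core step of such range-image registration can be sketched with the standard SVD-based rigid alignment (a textbook Procrustes/Kabsch step under known correspondences). This is a generic illustration with made-up points, not the thesis's two registration algorithms.

```python
import numpy as np

def register_2d(P, Q):
    """Return rotation R and translation t minimising ||R @ P + t - Q||
    for corresponding 2-D point sets P, Q of shape (2, n)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Rotate a small synthetic "scan" by 30 degrees, shift it, and recover the transform.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.default_rng(1).uniform(-1, 1, (2, 40))
Q = R_true @ P + np.array([[0.5], [-0.2]])
R_est, t_est = register_2d(P, Q)
```

In an incremental registration pipeline, this alignment step would be iterated while re-estimating correspondences between consecutive range images.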
In this paper, residual sinks are used in bond graph model-based quantitative fault detection for the coupling of a model of a faultless process engineering system to a bond graph model of the faulty system. In this way, integral causality can be used as the preferred computational causality in both models, and there is no need for numerical differentiation. Furthermore, unknown variables do not need to be eliminated from power continuity equations in order to obtain analytical redundancy relations (ARRs) in symbolic form. Residuals indicating faults are computed numerically as components of a descriptor vector of a differential-algebraic equation system derived from the coupled bond graphs. The presented bond graph approach especially aims at models with non-linearities that make it cumbersome or even impossible to derive ARRs from model equations by elimination of unknown variables. For illustration, the approach is applied to a non-controlled as well as to a controlled hydraulic two-tank system. Finally, it is shown that not only the numerical computation of residuals but also the simultaneous numerical computation of their sensitivities with respect to a parameter can be supported by bond graph modelling.
This thesis presents the implementation and validation of image processing problems in hardware to estimate the performance and precision gain. It compares the implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation costs of their development are an important aspect of the validation. The analysis of the flexibility and extensibility that can be achieved by a modular implementation of the FPGA design was another major aspect. This work is based upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group, which is located at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the created light dots on the projection surface of the immersive environment. Having the center points of those light dots allows the estimation of the user’s position and orientation. One major issue that makes Computer Vision problems computationally expensive is the high amount of data that has to be processed in real time. Therefore, one major target for the implementation was to reach a processing speed of more than 30 frames per second. This would allow the system to provide feedback to the user with a response time faster than the human visual perception.
One problem that comes with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a single pixel might cover an area of several square centimetres, so a precision error of only a few pixels might lead to an offset of several centimetres in the estimated user's position. In this research work, a detection and tracking system for BLOBs has been developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding and for sub-pixel precision of the computed BLOB center points have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation has been realized. The system can transmit the detection or tracking results on two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the strong increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution; the design of such high-level problems in hardware is more expensive than a software implementation. The search and match steps of the tracking approach could be realized more efficiently and reliably in software.
The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system’s performance and precision.
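Run-length encoding itself is simple to illustrate. The sketch below encodes one binary image row as (value, run length) pairs; the row values are hypothetical, not actual thresholded camera data.

```python
# Minimal run-length encoding of a binary image row, of the kind used to
# compress thresholded pixel data before BLOB detection.
def rle_encode(row):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 0]
encoded = rle_encode(row)      # [(0, 3), (1, 2), (0, 1), (1, 3), (0, 1)]
assert rle_decode(encoded) == row
```

For mostly-dark thresholded frames, long zero runs compress well, which is why such a module reduces the data volume the FPGA pipeline must move.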
A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment, despite perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities in them. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on the reasoning of physics novices about everyday physical phenomena. We also formalize some definitions of properties of objects in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular.
Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept for the usefulness of naive physics in external fault reasoning in robotics.
In the past decade computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies will be introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. In order to reach the goal of individualising the model, a chain of tools becomes necessary, including medical imaging, image acquisition and processing, mesh generation, material modelling and finite element simulation (possibly on parallel computer architectures). The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook into the future of computer biomechanics.
In a research project funded by the German Research Foundation, meteorologists, data publication experts, and computer scientists optimised the publication process of meteorological data and developed software that supports metadata review. The project group placed particular emphasis on scientific and technical quality assurance of primary data and metadata. At the end, the software automatically registers a Digital Object Identifier at DataCite. The software has been successfully integrated into the infrastructure of the World Data Center for Climate, but a key was to make the results applicable to data publication processes in other sciences as well.
A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on those actions of manipulators which involve releasing an object. The proposed approach, which is triggered after detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e. a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object.
Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when the object is released in the state corresponding to the example. The description vocabulary is also used to label the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest a releasing state for the object that avoids the occurrence of unknown external faults.
The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach, we also test its prediction ability along with analysing the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of an object in order to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.
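The abstract does not spell out the N-Bins algorithm, but its bin-and-label idea can be approximated by a minimal histogram classifier: split a parameter range into bins and label each bin by the majority class of the training examples falling into it. The bin count, ranges and toy data below are our own assumptions, not the thesis's actual procedure.

```python
# Illustrative histogram-bin classifier in the spirit of (not identical to)
# N-Bins: majority class per bin over a 1-D release parameter.
def fit_bins(values, labels, n_bins, lo, hi):
    width = (hi - lo) / n_bins
    counts = [{} for _ in range(n_bins)]
    for v, y in zip(values, labels):
        i = min(int((v - lo) / width), n_bins - 1)   # clamp v == hi into last bin
        counts[i][y] = counts[i].get(y, 0) + 1
    bins = [max(c, key=c.get) if c else None for c in counts]
    return bins, lo, width

def predict(model, v):
    bins, lo, width = model
    i = min(max(int((v - lo) / width), 0), len(bins) - 1)
    return bins[i]

# Toy data: release parameters below 0.5 behave as "desired", above as "undesired".
vals = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
labs = ["desired"] * 3 + ["undesired"] * 3
model = fit_bins(vals, labs, n_bins=4, lo=0.0, hi=1.0)
```

A robot could then suggest a releasing state by sampling parameter values and keeping those whose bin is labelled "desired".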
The ability to detect people has become a crucial subtask, especially for robotic systems that aim at applications in public or domestic environments. Robots already provide their services, e.g. in real home improvement markets, and guide people to a desired product. In such a scenario, many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people in a reasonable range in front of the robot. Challenges of such a real-world task are, e.g., changing lighting conditions, a dynamic environment and different people shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected with a rate of 87.29% and sitting people with 74.94% using a Random Forest classifier. Certain objects caused several false detections; to eliminate those, a verification step is proposed which further evaluates the person's shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz).
Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.
The work presented in this paper focuses on the comparison of well-known and new fault-diagnosis algorithms in the robot domain. The main challenge for fault diagnosis is to allow the robot to cope effectively not only with internal hardware and software faults but also with external disturbances and errors from dynamic and complex environments. Based on a study of the literature on fault-diagnosis algorithms, I selected four of these methods, based on both linear and non-linear models, analysed them and implemented them in a mathematical robot model representing a four-wheeled omnidirectional robot. In experiments I tested the ability of the algorithms to detect and identify abnormal behaviour and to optimize the model parameters for the given training data. The final goal was to point out the strengths of each algorithm and to determine which method would best suit the demands of fault diagnosis for a particular robot.
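The common core of such model-based methods (comparing measurements against a nominal model's prediction and thresholding the residual) can be sketched as follows. The data and threshold are made up, and the robot model itself is not reproduced here.

```python
# Generic residual-based fault detection sketch: flag samples whose deviation
# from the nominal model's prediction exceeds a chosen threshold.
def detect_faults(measured, predicted, threshold):
    """Return indices of samples whose residual |measured - predicted| > threshold."""
    return [i for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > threshold]

predicted = [0.0, 0.1, 0.2, 0.3, 0.4]   # nominal model output (hypothetical)
measured  = [0.0, 0.1, 0.9, 0.3, 0.4]   # sample 2 deviates: a fault candidate
faulty = detect_faults(measured, predicted, threshold=0.2)
```

Fault identification then goes a step further than this sketch, attributing each flagged residual to a specific fault hypothesis in the model.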
We present our approach to extend a Virtual Reality software framework towards the use for Augmented Reality applications. Although VR and AR applications have very similar requirements in terms of abstract components (like 6DOF input, stereoscopic output, simulation engines), the requirements in terms of hardware and software vary considerably. In this article we would like to share the experience gained from adapting our VR software framework for AR applications. We will address design issues for this task. The result is a VR/AR basic software that allows us to implement interactive applications without fixing their type (VR or AR) beforehand. Switching from VR to AR is a matter of changing the configuration file of the application. We also give an example of the use of the extended framework: Augmenting the magnetic field of bar magnets in physics classes. We describe the setup of the system and the real-time calculation of the magnetic field, using a GPU.
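The abstract's key claim — that switching from VR to AR is only a matter of changing the application's configuration file — can be sketched as a config-driven device factory. The class and key names below are invented for illustration; the actual framework's API differs.

```python
import json

# Hypothetical output-device abstractions; the real framework's classes differ.
class StereoDisplay:
    kind = "VR"   # head-mounted stereoscopic rendering

class VideoSeeThroughDisplay:
    kind = "AR"   # camera image with virtual overlay

OUTPUT_DEVICES = {"hmd_stereo": StereoDisplay, "camera_overlay": VideoSeeThroughDisplay}

def build_output(config_text: str):
    """Instantiate the output device named in the configuration file,
    leaving the application code unaware of the VR/AR distinction."""
    cfg = json.loads(config_text)
    return OUTPUT_DEVICES[cfg["output_device"]]()

app_output = build_output('{"output_device": "camera_overlay"}')
```

The application only ever talks to the abstract device; swapping `camera_overlay` for `hmd_stereo` in the file flips the whole application from AR to VR.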
For the case when the abstraction of instantaneous state transitions is adopted, this paper proposes to start fault detection and isolation in an engineering system from a single time-invariant causality bond graph representation of a hybrid model. To that end, the paper picks up on a long-known proposal to model switching devices by a transformer modulated by a Boolean variable and a resistor in fixed conductance causality accounting for its ON resistance. Bond graph representations of hybrid system models developed in this way have been used so far mainly for the purpose of simulation. The paper shows that they can well constitute an approach to the bond-graph-based quantitative fault detection and isolation of hybrid models. Advantages are that the standard sequential causality assignment procedure can be used without modification. A single set of analytical redundancy relations valid for all physically feasible system modes can be (automatically) derived from the bond graph. Stiff model equations due to small values of the ON resistance in the switch model may be avoided by symbolic reformulation of equations and letting the ON resistance of some switches tend to zero, turning them into ideal switches.
First, for two examples considered in the literature, it is shown that the approach proposed in this paper can produce the same analytical redundancy relations as were obtained from a hybrid bond graph with controlled junctions and the use of a sequential causality assignment procedure designed especially for fault detection and isolation purposes. Moreover, the usefulness of the proposed approach is illustrated in two case studies by its application to standard switching circuits extensively used in power electronic systems and by simulation of some fault scenarios. The approach, however, is not confined to the fault detection and isolation of such systems. Analytically validated simulation results obtained by means of the program Scilab give confidence in the approach.
A bond graph representation of switching devices known for a long time has been a modulated transformer with a modulus b(t) ∈ {0,1} for all t ≥ 0 in conjunction with a resistor R:Ron accounting for the ON-resistance of a switch considered non-ideal. Besides other representations, this simple model has been used in bond graphs for simulation of the dynamic behaviour of hybrid systems. A previous article of the author has proposed to use the transformer–resistor pair in bond graphs for fault diagnosis in hybrid systems. Advantages are a unique bond graph for all system modes, the application of the unmodified standard Sequential Causality Assignment Procedure, fixed computational causalities and the derivation of analytical redundancy relations incorporating 'Boolean' transformer moduli so that they hold for all system modes. Switches temporarily connect and disconnect model parts. As a result, some independent storage elements may temporarily become dependent, so that the number of state variables is not time-invariant. This article addresses this problem in the context of modelling and simulation of fault scenarios in hybrid systems. In order to keep time-invariant preferred integral causality at storage ports, residual sinks previously introduced by the author are used. When two storage elements become dependent at a switching time instant ts, a residual sink is activated. It enforces that the outputs of the two dependent storage elements become immediately equal by imposing the conjugate power variable of appropriate value on their inputs. The approach is illustrated by the bond graph modelling and simulation of some fault scenarios in a standard three-phase switched power inverter supplying power into an RL-load in a delta configuration. A well-developed approach to model-based fault detection and isolation is to evaluate the residuals of analytical redundancy relations.
In this article, analytical redundancy relation residuals have been computed numerically by coupling a bond graph of the faulty system to one of the non-faulty system by means of residual sinks. The presented approach is not confined to power electronic systems but can be used for hybrid systems in other domains as well. In further work, the RL-load may be replaced by a bond graph model of an alternating current motor in order to study the effect of switch failures in the power inverter on the dynamic behaviour of the motor.
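The core idea of evaluating analytical redundancy relation (ARR) residuals can be illustrated numerically: a residual built from a model constraint stays near zero while measurements match the nominal model and deviates under a parametric fault. The toy ARR below (a single resistive branch, r = u − R·i) is purely illustrative and far simpler than the bond-graph-derived relations in the papers.

```python
import numpy as np

# Illustrative ARR for a resistive branch: r = u - R*i. With nominal
# parameters the residual is (near) zero; a parametric fault (here a
# 30% drifted resistance) makes it deviate.
R_nominal = 10.0
i = np.linspace(0.1, 1.0, 10)          # measured current
u_healthy = R_nominal * i              # measured voltage, healthy system
u_faulty = 1.3 * R_nominal * i         # measured voltage under the fault

def residual(u, i, R=R_nominal):
    return u - R * i

r_healthy = np.max(np.abs(residual(u_healthy, i)))
r_faulty = np.max(np.abs(residual(u_faulty, i)))
fault_detected = r_faulty > 1e-6 and r_healthy < 1e-6
```

In the hybrid-system setting of the papers, the ARRs additionally carry the Boolean transformer moduli, so the same relation set holds across all switch modes.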
In service robotics, tasks that do not involve objects are rare; searching, fetching and delivering tasks, for instance, all revolve around objects. Service robots are supposed to capture object-related information in real-world scenes efficiently, for instance in the presence of clutter and noise, while remaining flexible and scalable enough to memorize a large set of objects. Besides object perception tasks such as object recognition, where the object's identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based on e.g. their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene over their description to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to a more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories.
Experimental results are shown for different shapes related to domestically occurring object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false positive handling to enhance the robustness of the categorization. Also an initial approach towards incremental shape category learning is proposed, which learns a new category based on the set of previously learned shape categories.
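A shape distribution-based descriptor of the kind mentioned above can be sketched in the spirit of the classic D2 distribution: histogram the distances between randomly sampled point pairs on the candidate's surface. This is a generic sketch, not the thesis's GNG-based implementation; the fixed distance range is chosen only so the two toy shapes are comparable.

```python
import numpy as np

def d2_descriptor(points, bins=16, n_pairs=2000, seed=0):
    """Normalized histogram of distances between random point pairs
    (a simple shape distribution; the thesis combines this idea with a
    GNG-reconstructed surface). Fixed range for comparability."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(points), size=(n_pairs, 2))
    d = np.linalg.norm(points[idx[:, 0]] - points[idx[:, 1]], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 4.0))
    return hist / hist.sum()

rng = np.random.default_rng(1)
ball = rng.normal(size=(500, 3))
ball /= np.linalg.norm(ball, axis=1, keepdims=True)   # spherical shell (ball-like)
box = rng.uniform(-1, 1, size=(500, 3))               # box-like volume

desc_ball = d2_descriptor(ball)
desc_box = d2_descriptor(box)
```

Different shape categories (here ball vs. box) yield clearly different descriptors, which is what makes them usable as input to a supervised shape learner.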
Simulations within virtual environments usually require underlying semantics. In the case of traffic simulations, defined traffic networks are typically used. These networks are mostly created by hand, which makes their creation error-prone and time-consuming. This project was carried out within the AVeSi project, which researches the development of a realistic traffic simulation for virtual environments. The simulation approach pursued in the project is based on two levels of complexity – a microscopic and a mesoscopic one. To realize a transition between the simulation levels, the traffic networks must be linked, which is likewise very time-consuming. This report presents traffic network models for both levels. Subsequently, an approach is described that enables the automatic generation and linking of traffic networks for both models. Data in the OpenDRIVE® format serves as the basis for generating the networks. For evaluation, real-world OpenStreetMap data was converted into OpenDRIVE® datasets using third-party software. It could be shown that the approach makes it possible to generate large traffic networks within a few minutes, on which simulations can be executed immediately. However, the quality of the networks generated for the evaluation is not sufficient for environments in which a high degree of realism is required, which makes an additional post-processing step necessary. The quality problems could be traced back to the fact that the level of detail of the OpenStreetMap data underlying the evaluation is not high enough and that the conversion process is not sufficiently transparent.
In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology and chemistry. The steps include the analysis of system requirements in general, the analysis of school experiments and the analysis of the demands on input and output devices. Based on the results of these steps we show a taxonomy of school experiments and provide a comparison between several currently available devices which can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.
Realism and plausibility of computer controlled entities in entertainment software have been enhanced by adding both static personalities and dynamic emotions. Here a generic model is introduced which allows the transfer of findings from real-life personality studies to a computational model. This information is used for decision making. The introduction of dynamic event-based emotions enables adaptive behavior patterns. The advantages of this new model have been validated with a four-way crossroad in a traffic simulation. Driving agents using the introduced model enhanced by dynamics were compared to agents based on static personality profiles and simple rule-based behavior. It has been shown that adding an adaptive dynamic factor to agents improves perceivable plausibility and realism. It also supports coping with extreme situations in a fair and understandable way.
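The combination of a static personality profile with a dynamic, event-based emotion can be sketched as follows. The traits, update rule and decision threshold below are invented for illustration (the paper's model is a validated, far richer construction); the example mirrors the crossroad scenario, where accumulated waiting erodes an agreeable agent's willingness to yield.

```python
from dataclasses import dataclass

@dataclass
class DriverAgent:
    # Static Big Five-style traits in [0, 1] (hypothetical subset).
    agreeableness: float
    neuroticism: float
    frustration: float = 0.0   # dynamic, event-driven emotion

    def wait(self, seconds: float):
        # Waiting raises frustration, faster for more neurotic agents.
        gain = 0.01 * seconds * (0.5 + self.neuroticism)
        self.frustration = min(1.0, self.frustration + gain)

    def yields_right_of_way(self) -> bool:
        # Agreeable agents yield; accumulated frustration erodes that tendency.
        return self.agreeableness * (1.0 - self.frustration) > 0.3

patient = DriverAgent(agreeableness=0.9, neuroticism=0.1)
impatient = DriverAgent(agreeableness=0.9, neuroticism=0.9)
for _ in range(10):              # both wait 60 s at the crossroad
    patient.wait(6.0)
    impatient.wait(6.0)
```

After the same wait, the two identically agreeable agents diverge: the low-neuroticism agent still yields, the high-neuroticism one no longer does — the kind of adaptive, yet consistent, behaviour the evaluation measured.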
The reciprocal translocation t(12;21)(p13;q22), the most common structural genomic alteration in B-cell precursor acute lymphoblastic leukaemia in children, results in a chimeric transcription factor TEL-AML1 (ETV6-RUNX1). We identified directly and indirectly regulated target genes utilizing an inducible TEL-AML1 system derived from the murine pro B-cell line BA/F3 and a monoclonal antibody directed against TEL-AML1. By integration of promoter binding identified with chromatin immunoprecipitation (ChIP)-on-chip, gene expression and protein output through microarray technology and stable isotope labelling by amino acids in cell culture, we identified 217 directly and 118 indirectly regulated targets of the TEL-AML1 fusion protein. Directly, but not indirectly, regulated promoters were enriched in AML1-binding sites. The majority of promoter regions were specific for the fusion protein and not bound by native AML1 or TEL. Comparison with gene expression profiles from TEL-AML1-positive patients identified 56 concordantly misregulated genes with negative effects on proliferation and cellular transport mechanisms and positive effects on cellular migration and stress responses, including immunological responses. In summary, this work for the first time gives a comprehensive insight into how TEL-AML1 expression may directly and indirectly alter cells to become prone to leukemic transformation.
Generating and visualizing large areas of vegetation that look natural makes terrain surfaces much more realistic. However, this is a challenging field in computer graphics, because ecological systems are complex and visually appealing plant models are geometrically detailed. This work presents Silva (System for the Instantiation of Large Vegetated Areas), a system to generate and visualize large vegetated areas based on their ecological surrounding. Silva generates vegetation on Wang tiles with associated reusable distributions, enabling multi-level instantiation. This paper presents a method to generate Poisson Disc Distributions (PDDs) with variable radii on Wang tile sets (without a global optimization) that is able to produce seamless tilings. Because Silva has a freely configurable generation pipeline and can consider plant neighborhoods, it is able to incorporate arbitrary abiotic and biotic components during generation. Based on multi-level instancing and nested kd-trees, the distributions on the Wang tiles allow their acceleration structures to be reused during visualization. This enables Silva to visualize large vegetated areas of several hundred square kilometers with low render times and a small memory footprint.
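A Poisson disc distribution with variable radii can be sketched with naive dart throwing on a single tile: accept a candidate plant position only if every previously placed plant stays outside the sum of the two radii. This is a simplified illustration of the distribution property only; the paper's contribution — making such distributions seamless across a Wang tile set without global optimization — is not reproduced here.

```python
import numpy as np

def poisson_disc(width, height, r_min, r_max, attempts=1500, seed=0):
    """Naive dart throwing for a variable-radius Poisson disc distribution:
    a candidate is accepted if it keeps every prior sample at least the
    sum of both radii away (think: plant footprints must not overlap)."""
    rng = np.random.default_rng(seed)
    pts, radii = [], []
    for _ in range(attempts):
        p = rng.uniform([0.0, 0.0], [width, height])
        r = rng.uniform(r_min, r_max)
        if all(np.linalg.norm(p - q) >= r + rq for q, rq in zip(pts, radii)):
            pts.append(p)
            radii.append(r)
    return np.array(pts), np.array(radii)

pts, radii = poisson_disc(10.0, 10.0, 0.4, 0.8)
```

The resulting point set is blue-noise-like: dense but never crowded, which is why PDDs are the standard basis for natural-looking plant placement.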
A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem from it. Finding the globally optimal solution is an NP-complete problem; in order to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive a graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).
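One divisive step of such a spectral scheme can be sketched in a few lines: map the points onto an affinity graph, form the graph Laplacian, and split by the sign of the eigenvector belonging to the second-smallest eigenvalue (the Fiedler vector). This is a generic spectral bipartition sketch on toy 2D data, not the thesis's constrained algorithms.

```python
import numpy as np

# Two well-separated point groups; edge weights from a Gaussian kernel.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.2, (10, 2)), rng.normal(3.0, 0.2, (10, 2))])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)                 # affinity matrix
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W       # unnormalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]         # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)
```

Applied recursively to each resulting part, and combined with cannot-link constraints, this sign-split is the basic operation behind divisive hierarchical spectral clustering.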
Business process infrastructures like BPMS (Business Process Management Systems) and WfMS (Workflow Management Systems) traditionally focus on the automation of processes predefined at design time. This approach is well suited for routine tasks which are processed repeatedly and which are described by a predefined control flow. In contrast, knowledge-intensive work is more goal- and data-driven and less control-flow oriented. Knowledge workers need the flexibility to decide dynamically at run-time, based on current context information, on the best next process step to achieve a given goal. Obviously, in most practical scenarios, these decisions are complex and cannot be anticipated and modeled completely in a predefined process model. Therefore, adaptive and dynamic process management techniques are necessary to augment the control-flow oriented part of process management (which is still needed for knowledge workers as well) with flexible, context-dependent, goal-oriented support.
The contribution of the most common reciprocal translocation in childhood B-cell precursor leukemia, t(12;21)(p13;q22), to leukemia development is still under debate. Direct as well as secondary, indirect effects of the TEL-AML1 fusion protein are commonly recorded using cell lines and patient samples that have often borne the TEL-AML1 fusion protein for decades. To identify direct targets of the fusion protein, a short-term induction of TEL-AML1 is needed. Here we describe in detail the experimental procedure, quality controls and contents of the ChIP, mRNA expression and SILAC datasets associated with the study published by Linka and colleagues in the Blood Cancer Journal [1], utilizing a short-term induction of TEL-AML1 in an inducible precursor B-cell line model.
We are happy to present to you the special issue on Best Practice in Robot Software Development of the Journal on Software Engineering for Robotics! The spark for this special issue came during the eighth workshop on Software Development and Integration in Robotics (SDIR) at the 2013 IEEE International Conference on Robotics and Automation. The workshop focused on Robot Software Architectures, and the fruitful discussions made it clear that the design, development, and deployment of robot software is always an interplay between competing aspects. These are often couched in antagonistic pairs, such as dependability versus performance, and prominently include quality attributes as well as functional, nonfunctional, and application requirements.
The objective of this research project is to develop a user-friendly and cost-effective interactive input device that allows intuitive and efficient manipulation of 3D objects (6 DoF) in virtual reality (VR) visualization environments with flat projection walls. During this project, it was planned to develop an extended version of a laser pointer with multiple laser beams arranged in specific patterns. Using stationary cameras observing projections of these patterns from behind the screens, an algorithm is to be developed for reconstructing the emitter's absolute position and orientation in space. The laser pointer concept is an intuitive form of interaction that provides the user with a familiar, mobile and efficient means of navigating through a 3D environment. In order to navigate in a 3D world, the absolute position (x, y and z) and orientation (roll, pitch and yaw angles) of the device must be known, a total of 6 degrees of freedom (DoF). Ordinary laser-based pointers, when captured on a flat surface with a video camera system and then processed, only provide x and y coordinates, effectively reducing the available input to 2 DoF. To overcome this problem, an additional set of multiple (invisible) laser pointers is to be used in the pointing device. These laser pointers are arranged in such a way that the projections of their rays form one fixed dot pattern when intersected with the flat surface of the projection screens. Images of this pattern are captured via a real-time camera-based system and then processed using mathematical re-projection algorithms. This allows the reconstruction of the full absolute 3D pose (6 DoF) of the input device. Additionally, multi-user and collaborative work is to be supported by the system, allowing several users to interact with a virtual environment at the same time.
Possibilities to port processing algorithms into embedded processors or FPGAs will be investigated during this project as well.
The goal of the research project described here was the development of a prototype bicycle riding simulator for use in traffic education and traffic safety training. The developed prototype should be usable as universally as possible across different age groups and applications, and should also be mobile.
Might the gravity levels found on other planets and on the moon be sufficient to provide an adequate perception of upright for astronauts? Can the amount of gravity required be predicted from the physiological threshold for linear acceleration? The perception of upright is determined not only by gravity but also by visual information, when available, and by assumptions about the orientation of the body. Here, we used a human centrifuge to simulate gravity levels from zero to earth gravity along the long axis of the body and measured observers' perception of upright using the Oriented Character Recognition Test (OCHART) with and without visual cues arranged to indicate a direction of gravity that differed from the body's long axis. This procedure allowed us to assess the relative contribution of the added gravity in determining the perceptual upright. Control experiments off the centrifuge allowed us to measure the relative contributions of normal gravity, vision, and body orientation for each participant. We found that the influence of 1 g in determining the perceptual upright did not depend on whether the acceleration was created by lying on the centrifuge or by normal gravity. The 50% threshold for centrifuge-simulated gravity's ability to influence the perceptual upright was at around 0.15 g, close to the level of moon gravity but much higher than the threshold for detecting linear acceleration along the long axis of the body. This observation may partially explain the instability of moonwalkers but is good news for future missions to Mars.
Design of a declarative language for task-oriented grasping and tool-use with dextrous robotic hands
(2014)
Apparently simple manipulation tasks for a human, such as transportation or tool use, are challenging to replicate in an autonomous service robot. Nevertheless, dextrous manipulation is an important aspect for a robot in many daily tasks. While it is possible to manufacture special-purpose hands for one specific task in industrial settings, a general-purpose service robot in households must have flexible hands which can adapt to many tasks. Intelligently using tools enables the robot to perform tasks more efficiently and even beyond its designed capabilities. In this work a declarative domain-specific language, called Grasp Domain Definition Language (GDDL), is presented that allows the specification of grasp planning problems independently of a specific grasp planner. This design goal resembles the idea of the Planning Domain Definition Language (PDDL). The specification of GDDL requires a detailed analysis of the research in grasping in order to identify best practices in the different domains that contribute to a grasp. These domains describe, for instance, physical as well as semantic properties of objects and hands. Grasping always has a purpose, which is captured in the task domain definition. It enables the robot to grasp an object in a task-dependent manner. Suitable representations in these domains have to be identified and formalized, for which a domain-driven software engineering approach is applied. This kind of modeling allows the specification of constraints which guide the composition of domain entity specifications. The domain-driven approach fosters reuse of domain concepts while the constraints enable the validation of models already at design time. A proof-of-concept integration of GDDL into the GraspIt! grasp planner is developed. Preliminary results of this thesis have been published and presented at the IEEE International Conference on Robotics and Automation (ICRA).
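The flavor of such a declarative, planner-independent specification can be sketched as data plus a design-time validation step. The spec layout, domain vocabulary and constraint below are entirely hypothetical stand-ins — GDDL's actual grammar and domains differ — but they show the PDDL-like separation of problem description from planner.

```python
# Hypothetical, simplified grasp-problem specification in the spirit of GDDL;
# the real language's syntax and domain vocabulary differ.
spec = {
    "object": {"name": "mug", "shape": "cylindrical", "mass_kg": 0.35},
    "hand": {"name": "barrett", "fingers": 3},
    "task": {"name": "pour", "requires": ["side_grasp", "firm_force_closure"]},
}

# Stand-in for a design-time constraint: each task demands certain grasp types.
ALLOWED_TASKS = {"pour": {"side_grasp"}, "handover": {"top_grasp"}}

def validate(spec: dict) -> bool:
    """Check that the spec satisfies the task's grasp-type constraint
    before any planner ever runs (GDDL validates models at design time)."""
    required = set(spec["task"]["requires"])
    return ALLOWED_TASKS[spec["task"]["name"]] <= required

ok = validate(spec)
```

Because the constraint check operates on the declarative model alone, invalid problem specifications are rejected before being handed to a planner such as GraspIt!.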
Improving the study entry phase supports students in a decisive phase of their university education. Implementing improvements is a change process and can only be successful if the relevant stakeholders are addressed and convinced. In the described Teaching Quality Pact project, evaluation data is used as a means to discuss the situation of the study programs within the university. As these discussions were based on empirical data rather than on opinion, it was possible to achieve an open discussion about the measures to be implemented. The open discussion is maintained during the project as the results of the measures taken are analyzed.
Digitalization of a pen-&-paper role-playing game with transfer of interactions into the real world
(2015)
The present work is a consolidation of the master's project and the master's thesis building on it by Antony Konstantinidis and Nicolas Kopp. These works were created in 2013 and 2014 and together provide a comprehensive picture of software and game development and of the design of real-time applications, conveying background from the most diverse areas of mixed reality, storytelling, network design and artificial intelligence.
Advanced driver assistance systems (ADAS) are technology systems and devices designed to aid the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of the input images must be done, which consists of a traffic sign detection (TSD) algorithm to reduce the hypothesis space. The performance of the TSD algorithm is critical.
One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real time on high-end personal computers, but its computational performance must be improved in order to run in real time on embedded computer platforms.
To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times, detection rates and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark [29].
We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection and 2.4 times faster at octagon detection. All of these measurements were observed with better detection and false positive rates than the original RSD.
When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for circle and regular polygon detection. For circle detection we observed a lower detection rate than the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%.
We conclude that our Improved Radial Symmetry Detector is a significant improvement over the Radial Symmetry Detector, both for circle and regular polygon detection. We expect that our improved algorithm will pave the way to real-time traffic sign detection and recognition on embedded computer platforms.
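The voting idea behind radial symmetry detection can be sketched in a few lines: every pixel with a strong gradient casts a vote one radius away along its gradient direction, so the centres of circular signs accumulate many votes. This single-radius, unweighted sketch is a deliberate simplification of the fast radial symmetry scheme the thesis builds on.

```python
import numpy as np

def radial_symmetry_votes(img, radius):
    """Minimal gradient-voting sketch of radial symmetry detection: each
    strong-gradient pixel votes `radius` pixels along its gradient
    direction; circle centres collect many coincident votes."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(img, dtype=float)
    ys, xs = np.nonzero(mag > 0.1)
    for y, x in zip(ys, xs):
        dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
        vy, vx = int(round(y + dy * radius)), int(round(x + dx * radius))
        if 0 <= vy < acc.shape[0] and 0 <= vx < acc.shape[1]:
            acc[vy, vx] += 1
    return acc

# Dark ring (circle outline) of radius 8 centred at (20, 20) on a bright field.
img = np.ones((40, 40))
yy, xx = np.mgrid[0:40, 0:40]
img[np.abs(np.hypot(yy - 20, xx - 20) - 8) < 1.0] = 0.0
acc = radial_symmetry_votes(img, radius=8)
center = np.unravel_index(np.argmax(acc), acc.shape)
```

The accumulator peaks near the true circle centre; running the same voting at several radii (and, in the multiscale variant, at several image resolutions) yields a detector for signs of unknown size.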
Extraction of text information from visual sources is an important component of many modern applications, for example extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histograms of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods, one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features – aspect and occupancy ratios – and a linear SVM is used as classifier. One advantage of our method over the state of the art is that it is script-independent; it can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall with an F-score of 69% were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.
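The run-length variant of the stroke width extraction can be sketched directly: scan each row of a binary character mask, record the length of every foreground run, and histogram those lengths. This sketch covers horizontal runs only; the thesis's HSW additionally uses vertical runs (or the Stroke Width Transform) and the two region features.

```python
import numpy as np

def stroke_width_histogram(mask, bins=8, max_width=16):
    """Run-length based histogram of stroke widths: per row, measure the
    lengths of consecutive foreground runs, then histogram and normalize.
    Characters have a few dominant, consistent stroke widths; clutter
    regions typically do not."""
    widths = []
    for row in mask:
        run = 0
        for v in list(row) + [0]:          # sentinel closes a trailing run
            if v:
                run += 1
            elif run:
                widths.append(run)
                run = 0
    hist, _ = np.histogram(widths, bins=bins, range=(0, max_width))
    return hist / max(hist.sum(), 1)

# A crude 'H' glyph: two 2-px vertical bars joined by a 4-px crossbar.
glyph = np.zeros((12, 10), dtype=int)
glyph[:, 1:3] = 1
glyph[:, 7:9] = 1
glyph[5:7, 3:7] = 1
hsw = stroke_width_histogram(glyph)
```

For the glyph, almost all mass lands in the bin of its 2-pixel stroke width — exactly the peaked, script-independent signature the linear SVM separates from non-character regions.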
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the perceived sense of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, trials of moderate-intensity exercise (i.e. self-paced cycling) and no exercise (i.e. automatic propulsion) were performed within three levels of virtual environment exposure. Each trial was 5 min in duration and was followed by post-trial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposure, which likely contributed to an enhanced sense of presence.
Agent systems are used in many fields, yet current implementations are merely able to reproduce primarily rule-conforming or "scripted" behaviour, even when randomized methods are employed. A realistic representation, however, also requires deviations from the rules that occur not randomly but depending on context. Within this research project, a realistic road traffic simulator was implemented which, by means of a finely specified system of cognitive agents, also generates these irregular behaviours and thus simulates realistic traffic behaviour for use in VR applications. By extending the agents with psychological personality profiles based on the five-factor model, the agents show individualized and at the same time consistent behaviour patterns. A dynamic emotion model additionally provides situation-dependent adaptation of behaviour, e.g. during long waiting times. Since the detailed simulation of cognitive processes, personality influences and emotional states demands considerable computing power, a multi-layered simulation approach was developed that allows the level of detail of the computation and representation of each agent to be changed stepwise during the simulation, so that all agents in the system can be simulated consistently. In several evaluation iterations within an existing VR application – the applicant's FIVIS bicycle riding simulator – it was convincingly demonstrated that the implemented concepts solve the originally formulated research questions effectively and efficiently.
The steadily decreasing prices of display technologies and computer graphics hardware contribute to the increasing popularity of multiple-display environments, like large, high-resolution displays. It is therefore necessary that educational organizations give the new generation of computer scientists an opportunity to become familiar with this kind of technology. However, there is a lack of tools that allow for getting started easily. Existing frameworks and libraries that provide support for multi-display rendering are often complex to understand, configure and extend. This is critical especially in an educational context, where the time that students have for their projects is limited and quite short. These tools are also known and used mainly in research communities, and thus provide little benefit for future non-scientists. In this work we present an extension for the Unity game engine. The extension allows, with a small overhead, for the implementation of applications that can run on both single-display and multi-display systems. It takes care of the most common issues in the context of distributed and multi-display rendering, like frame, camera and animation synchronization, thus reducing and simplifying the first steps into the topic. In conjunction with Unity, which significantly simplifies the creation of different kinds of virtual environments, the extension enables students to build mock-up virtual reality applications for large, high-resolution displays, and to implement and evaluate new interaction techniques, metaphors and visualization concepts. Unity itself, in our experience, is very popular among computer graphics students and therefore familiar to most of them. It is also often employed in projects of both research institutions and commercial organizations; so learning it provides students with a qualification in high demand.
Rural areas often lack affordable broadband Internet connectivity, mainly due to the CAPEX and especially the OPEX of traditional operator equipment [HEKN11]. This digital divide limits access to knowledge, health care and other services for billions of people. Different approaches to closing this gap were discussed in the last decade [SPNB08]. In most rural areas, satellite bandwidth is expensive, and cellular networks (3G, 4G) as well as WiMAX suffer from the usually low population density, which makes it hard to amortize the costs of a base station [SPNB08].
An Empirical Evaluation of the Received Signal Strength Indicator for fixed outdoor 802.11 links
(2015)
For the evaluation of the received signal strength indicator (RSSI), this paper introduces a methodology that differs from previous publications by exploiting the spectral scan feature of recent Qualcomm Atheros WiFi NICs. This method is compared to driver reports and to an industrial-grade spectrum analyzer. During the conducted outdoor experiments, a decreased scattering of the RSSI compared to previous publications is observed. By applying well-known mathematical tests for normality, it is possible to show that the RSSI does not follow a normal distribution in a line-of-sight outdoor environment. The evaluated spectral scan feature offers additional possibilities to develop interference classifiers, which is an important step towards frequency allocation in long-distance 802.11 networks.
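To illustrate the kind of normality testing mentioned in the abstract, the following sketch applies the Jarque-Bera test (one well-known normality test; the paper does not specify which tests it used) to synthetic RSSI samples. The data, thresholds and parameters below are illustrative assumptions, not measurements from the paper.

```python
import math
import random

def jarque_bera(samples):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is
    the sample skewness and K the sample kurtosis. Under normality, JB is
    asymptotically chi-squared with 2 degrees of freedom."""
    n = len(samples)
    m = sum(samples) / n
    m2 = sum((x - m) ** 2 for x in samples) / n
    m3 = sum((x - m) ** 3 for x in samples) / n
    m4 = sum((x - m) ** 4 for x in samples) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(42)
# Synthetic RSSI samples in dBm: one Gaussian set, one heavily skewed set.
normal_rssi = [random.gauss(-70.0, 2.0) for _ in range(1000)]
skewed_rssi = [-70.0 + random.expovariate(0.5) for _ in range(1000)]

# The 5% critical value of chi^2(2) is about 5.99; values far above it
# reject normality. The skewed set should be rejected decisively, while
# the Gaussian set typically stays below the threshold.
print(jarque_bera(normal_rssi))
print(jarque_bera(skewed_rssi))
```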
TinyECC 2.0 is an open source library for Elliptic Curve Cryptography (ECC) in wireless sensor networks. This paper analyzes the side channel susceptibility of TinyECC 2.0 on a LOTUS sensor node platform. In our work we measured the electromagnetic (EM) emanation during computation of the scalar multiplication using 56 different configurations of TinyECC 2.0. All of them were found to be vulnerable, but to a different degree. The different degrees of leakage include adversary success using (i) Simple EM Analysis (SEMA) with a single measurement, (ii) SEMA using averaging, and (iii) Multiple-Exponent Single-Data (MESD) with a single measurement of the secret scalar. It is extremely critical that in 30 TinyECC 2.0 configurations a single EM measurement of an ECC private key operation is sufficient to simply read out the secret scalar. MESD requires additional adversary capabilities and it affects all TinyECC 2.0 configurations, again with only a single measurement of the ECC private key operation. These findings give evidence that in security applications a configuration of TinyECC 2.0 should be chosen that withstands SEMA with a single measurement and, beyond that, an addition of appropriate randomizing countermeasures is necessary.
The development of advanced robotic systems is challenging, as expertise from multiple domains needs to be integrated conceptually and technically. Model-driven engineering promises an efficient and flexible approach for developing robotics applications that copes with this challenge. Domain-specific modeling makes it possible to describe robotics concerns with concepts and notations closer to the respective problem domain. This raises the level of abstraction and results in models that are easier to understand and validate. Furthermore, model-driven engineering makes it possible to increase the level of automation, e.g. through code generation, and to bridge the gap between modeling and implementation. The anticipated results are improved efficiency and quality of the robotics systems engineering process. Within this contribution, we survey the available literature on domain-specific modeling and languages that target core robotics concerns. In total, 137 publications were identified that comply with a set of defined criteria, which we consider essential for contributions in this field. With the presented survey, we provide an overview of the state of the art of domain-specific modeling approaches in robotics. The surveyed publications are investigated from the perspective of users and developers of model-based approaches in robotics along a set of quantitative and qualitative research questions. The presented quantitative analysis clearly indicates the rising popularity of applying domain-specific modeling approaches to robotics in the academic community. Beyond this statistical analysis, we map the selected publications to a defined set of robotics subdomains and typical development phases in robotic systems engineering as a reference for potential users.
Furthermore, we analyze these contributions from a language engineering viewpoint and discuss aspects such as the methods and tools used for their implementation as well as their documentation status, platform integration, typical use cases and the evaluation strategies used for validation of the proposed approaches. Finally, we conclude with recommendations for discussion in the model-driven engineering and robotics community based on the insights gained in this survey.
The optimization goal for a logistics warehouse is high utilization of its transport system. This raises the question of how to select the orders that are processed concurrently within the warehouse without causing congestion, blockades or overload. This selection process is also known as path packing. This master's thesis studies path packing on a graph-theoretic level and compares several greedy heuristics, an optimal solution based on linear programming, and a combined approach. The approaches are evaluated in terms of running times and utilization on test data generated with different randomization schemes.
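As an illustration of the kind of greedy heuristic compared in the thesis (not its actual implementation; the conflict model and data below are made-up assumptions), one simple strategy accepts an order's transport path only if it is edge-disjoint from the paths already running in the warehouse graph:

```python
def greedy_path_packing(candidate_paths):
    """candidate_paths: list of paths, each a list of directed edges (u, v).
    Greedily accept paths in the given order, skipping any path that shares
    an edge with an already accepted one. Returns the accepted indices."""
    used_edges = set()
    accepted = []
    for i, path in enumerate(candidate_paths):
        edges = set(path)
        if used_edges.isdisjoint(edges):
            used_edges |= edges
            accepted.append(i)
    return accepted

# Hypothetical warehouse orders with their transport paths:
paths = [
    [("A", "B"), ("B", "C")],   # order 0
    [("B", "C"), ("C", "D")],   # order 1: conflicts with order 0 on (B, C)
    [("D", "E")],               # order 2: disjoint from both
]
print(greedy_path_packing(paths))  # accepted orders: [0, 2]
```

A linear-programming formulation, as contrasted in the thesis, would instead choose the subset maximizing utilization subject to per-edge capacity constraints; the greedy variant trades optimality for speed.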
The Cutting sticks problem is an NP-complete problem with potential applications in logistics. Basic definitions for its treatment as well as previous approaches to solving the problem are reviewed and complemented by several new results. The focus is on ideas for an algorithmic solution of the problem and of its variants.
WiFi-based Long Distance (WiLD) networks have emerged as a promising alternative approach for Internet in rural areas. However, the MAC layer, which is based on the IEEE 802.11 standard, assumes contiguous stations in a cell and is spatially restricted to a few hundred meters at most. In this work, we summarize efforts by different researchers to use IEEE 802.11 over long distances. In addition, we introduce WiLDToken, our solution for optimizing throughput and fairness and reducing delay on WiLD links. Compared to previous alternative MAC layer protocols for WiLD, our focus is on optimizing a single link in a multi-radio multi-channel mesh. We implement our protocol in the ns-3 network simulator and show that WiLDToken is superior to an adapted version of the Distributed Coordination Function (DCF) for different link distances. We find that the throughput on a single link is close to the physical data rate without a major decrease over longer distances.
WiFi-based Long Distance (WiLD) networks have emerged as a promising alternative approach for providing Internet in rural areas. An important factor in the network planning of these wireless networks is estimating the path loss. In this work, we present various propagation models we found suitable for point-to-point (P2P) operation in the WiFi frequency bands. We conducted outdoor experiments with commercial off-the-shelf (COTS) hardware in our testbed, which consists of 7 different long-distance links ranging from 450 m to 10.3 km and a mobile measurement station. We found that for short links with omni-directional antennas, ground reflection is a measurable phenomenon. For longer links, we show that either FSPL or the Longley-Rice model provides accurate results for certain links. We conclude that a good site survey is needed to exclude influences not included in the propagation models.
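The FSPL model mentioned above has a simple closed form. As an illustrative sketch (not the paper's tooling; the 5 GHz operating frequency is an assumption), the following snippet computes free-space path loss for link lengths matching the testbed's range:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f/c),
    with distance d in meters and frequency f in Hz."""
    c = 299_792_458.0  # speed of light in m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Link lengths bracketing the testbed described above, at an assumed 5 GHz:
for d in (450.0, 10_300.0):
    print(f"{d / 1000:.2f} km: {fspl_db(d, 5.0e9):.1f} dB")
```

At 5 GHz this yields roughly 99.5 dB for the 450 m link and about 126.7 dB for the 10.3 km link; real links additionally see antenna gains, cable losses and the terrain effects the Longley-Rice model accounts for.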
Design of an Active Multispectral SWIR Camera System for Skin Detection and Face Verification
(2016)
Biometric face recognition is becoming more frequently used in different application scenarios. However, spoofing attacks with facial disguises are still a serious problem for state-of-the-art face recognition algorithms. This work proposes an approach to face verification based on spectral signatures of material surfaces in the short wave infrared (SWIR) range. They allow distinguishing authentic human skin reliably from other materials, independent of the skin type. We present the design of an active SWIR imaging system that acquires four-band multispectral image stacks in real-time. The system uses pulsed small band illumination, which allows for fast image acquisition and high spectral resolution and renders it largely independent of ambient light. After extracting the spectral signatures from the acquired images, detected faces can be verified or rejected by classifying the material as "skin" or "no-skin". The approach is extensively evaluated with respect to both acquisition and classification performance. In addition, we present a database containing RGB and multispectral SWIR face images, as well as spectrometer measurements of a variety of subjects, which is used to evaluate our approach and will be made available to the research community by the time this work is published.
This article reviews the state of development in vehicle networking from an IT security perspective. Established in-car communication systems and traffic telematics applications are presented and discussed, as are the future communication technologies Car-2-Car and Car-2-X. IT security in the automobile is a difficult field, since it involves integrating new, innovative applications into a highly complex existing vehicle architecture without creating new hazards for the vehicle occupants. Moreover, the way these applications work, and their impact on the right to informational self-determination, often remains opaque. The concluding discussion gives recommendations for action from the consumers' perspective.
Students' actual courses of study frequently deviate from the officially planned curriculum. For planning and further developing degree programmes and curricula in a way that improves student success, those responsible often lack insight into actual study path patterns, and into which patterns are typically successful or less successful. Process mining techniques can help create more transparency in the analysis of study paths, and thereby support the detection of typical study path patterns, the conformance checking of concrete study paths against the prescribed curriculum, and targeted improvement of the curriculum.
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze, exploiting the human visual system's limitations to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and which also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of a combination of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing on the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users' quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy and quality ratings.
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
Service robots performing complex tasks involving people in houses or public environments are becoming more and more common, and there is a huge interest from both the research and the industrial point of view. The RoCKIn@Home challenge has been designed to compare and evaluate different approaches and solutions to tasks related to the development of domestic and service robots. RoCKIn@Home competitions have been designed and executed according to the benchmarking methodology developed during the project and received very positive feedback from the participating teams. Tasks and functionality benchmarks are explained in detail.
Emotional communication is a key element of habilitation care of persons with dementia. It is, therefore, highly preferable for assistive robots that are used to supplement human care provided to persons with dementia, to possess the ability to recognize and respond to emotions expressed by those who are being cared-for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly.
Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces; facial expressions on older faces have been almost entirely excluded. This is due to the fact that the facial expression databases available and used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely the FACES database, which was developed to address exactly the same problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with an almost identical and systematic representation of the young, middle-aged and older age groups.
In this work, we evaluate and compare the performance of two of the existing image-based approaches for facial expression recognition, over a broad spectrum of age ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We have experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotions. We also identified that, among the three age groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to the performance of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing the expressions on older faces, and difficulty in recognizing the expression of sadness.
The findings of our work establish the need for developing approaches for facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
Population ageing and the growing prevalence of disability have resulted in a growing need for personal care and assistance. The insufficient supply of personal care workers and the rising costs of long-term care have turned this phenomenon into a greater social concern. This has resulted in a growing interest in assistive technology in general, and assistive robots in particular, as a means of substituting or supplementing the care provided by humans, and as a means of increasing the independence and overall quality of life of persons with special needs. Although many assistive robots have been developed in research labs world-wide, very few are commercially available. One of the reasons for this is cost. One way of optimising cost is to develop solutions that address the specific needs of users. As a precursor to this, it is important to identify gaps between what users need and what the technology (assistive robots) currently provides. This information is obtained through technology mapping.
The current literature lacks a mapping between user needs and assistive robots, at the level of individual systems. The user needs are not expressed in uniform terminology across studies, which makes comparison of results difficult. In this research work, we have illustrated the technology mapping of assistive robots using the International Classification of Functioning, Disability and Health (ICF). ICF provides standard terminology for expressing user needs in detail. Expressing the assistive functions of robots also in ICF terminology facilitates communication between different stakeholders (rehabilitation professionals, robotics researchers, etc.).
We also investigated existing taxonomies for assistive robots. It was observed that there is no widely accepted taxonomy for classifying assistive robots. However, there exists an international standard, ISO 9999, which classifies commercially available assistive products. The applicability of the latest revision of ISO 9999 standard for classifying mobility assistance robots has been studied. A partial classification of assistive robots based on ISO 9999 is suggested. The taxonomy and technology mapping are illustrated with the help of four robots that have the potential to provide mobility assistance. These are the SmartCane, the SmartWalker, MAid and Care-O-bot (R) 3. SmartCane, SmartWalker and MAid provide assistance by supporting physical movement. Care-O-bot (R) 3 provides assistance by reducing the need to move.
In its general formulation, the Cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P != NP, no efficient, i.e. polynomial-time, algorithms exist for solving the general problem.
This paper presents approaches by which certain instances of the problem can be computed efficiently. Parameters that are important for the computation are characterized, and their mutual relationships are analysed.
This paper describes the security mechanisms of several wireless building automation technologies, namely ZigBee, EnOcean, Z-Wave, KNX, FS20, and HomeMatic. It is shown that none of the technologies provides the necessary measure of security that should be expected in building automation systems. One of the conclusions drawn is that software embedded in systems that are built for a lifetime of twenty years or more needs to be updatable.
In its general formulation, the Cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P != NP, no efficient, i.e. polynomial-time, algorithms exist for solving the general problem.
This paper presents efficient solutions for a number of instances.
Recent work in image captioning and scene segmentation has shown significant results in the context of scene understanding. However, most of these developments have not been extrapolated to research areas such as robotics. In this work we review the current state-of-the-art models, datasets and metrics in image captioning and scene segmentation. We introduce an anomaly detection dataset for the purpose of robotic applications, and we present a deep learning architecture that describes and classifies anomalous situations. We report a METEOR score of 16.2 and a classification accuracy of 97%.
RoCKIn@Work focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a bigger user story painting the picture of a scaled-down real-world factory scenario. The elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, i.e. machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure that the gathered data could be evaluated automatically. This also had the positive effect that teams were made aware of the importance of data logging, not only during a competition but also during research, as a useful utility in their own laboratory. The task and functionality benchmarks are explained in detail, starting with their use case in industry, further detailing their execution, and providing information on the scoring and ranking mechanisms for each specific benchmark.
The combination of Software-Defined Networking (SDN) and Wireless Mesh Networks (WMNs) is challenging due to the different natures of both concepts. SDN describes networks with homogeneous, static and centrally controlled topologies. In contrast, a WMN is characterized by dynamic and distributed network control, and adds new challenges with respect to time-critical operation. However, SDN and WMN are both associated with decreasing the operational costs of communication networks, which is especially beneficial for Internet provisioning in rural areas. This work surveys the current status of Software-Defined Wireless Mesh Networking. Besides a general overview of the domain of wireless SDN, this work focuses on several identified aspects: representing and controlling wireless interfaces, control-plane connection and topology discovery, modulation and coding, routing and load balancing, and client handling. A complete overview of the surveyed solutions, open issues and new research directions is provided for each aspect.
Real-World Performance of current Mesh Protocols in a small-scale Dual-Radio Multi-Link Environment
(2017)
Two key questions motivated the work in this paper: What is the impact of different usage schemes for multiple channels in a dual-radio Wireless Mesh Network (WMN), and what is the impact of some popular WMN routing protocols on its performance? These two questions were evaluated in a small and simple real-world scenario. A major concern was reproducibility of the results. We show that it is beneficial to use both radios on different frequencies in a fully meshed environment with four routers. The routing protocols Babel, B.A.T.M.A.N. V, BMX7 and OLSRv2 recognize a saturated channel and prefer the other one. We show that in our scenario all of the protocols perform equally well, since the protocol overhead is comparably low and does not influence the overall performance of the network.
WiFi-based Long Distance (WiLD) networks have emerged as a promising alternative approach for Internet in rural areas. The main hardware components of these networks are commercial off-the-shelf WiFi radios and directional antennas. During our experience with real-world WiLD networks, we found that interference among long-distance links is a major issue even with high-gain directional antennas. In this work, we provide an in-depth analysis of these interference effects by conducting simulations in ns-3. To closely match the real-world interference effects, we implemented a module to load the radiation patterns of commonly used antennas. We analyze two different interference scenarios typically present as part of larger networks. The results show that the side lobes of directional antennas significantly influence the throughput of long-distance WiFi links depending on their orientation. This work emphasizes that the usage of simple directional antenna models needs to be considered carefully.
This paper introduces a random number generator (RNG) based on the avalanche noise of two diodes. A true random number generator (TRNG) generates true random numbers with the use of the electronic noise produced by two avalanche diodes. The amplified outputs of the diodes are sampled and digitized. The difference between the two concurrently sampled and digitized outputs is calculated and used to select a seed and to drive a pseudo-random number generator (PRNG). The PRNG is an xorshift generator that generates 1024 bits in each cycle. Every sequence of 1024 bits is moderately modified and output. The TRNG delivers the next seed and the next cycle begins. The statistical behavior of the generator is analyzed and presented.
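As a hedged sketch of the xorshift stage described above (the paper does not state its shift constants or word size; Marsaglia's 64-bit variant with shifts 13, 7 and 17 is assumed here), one 1024-bit cycle could be produced by concatenating sixteen 64-bit xorshift outputs:

```python
MASK64 = (1 << 64) - 1

def xorshift64(state):
    """One step of a 64-bit xorshift generator; state must be non-zero."""
    state ^= (state << 13) & MASK64
    state ^= state >> 7
    state ^= (state << 17) & MASK64
    return state

def cycle_1024_bits(seed):
    """Produce one 1024-bit block from a non-zero 64-bit seed by
    concatenating sixteen consecutive 64-bit xorshift outputs."""
    words = []
    state = seed & MASK64
    for _ in range(16):
        state = xorshift64(state)
        words.append(state)
    block = 0
    for w in words:
        block = (block << 64) | w
    return block  # a 1024-bit integer

# In the paper's design the seed would come from the avalanche-noise TRNG;
# here a fixed constant stands in for it.
block = cycle_1024_bits(0x9E3779B97F4A7C15)
print(block.bit_length())
```

The paper's additional "moderate modification" of each 1024-bit sequence before output, and the reseeding from the TRNG each cycle, are omitted here since their details are not given in the abstract.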
This work discusses how to use OSM for robotic applications and aims at starting a discussion between the OSM and the robotics communities. OSM contains much topological and semantic information that can be directly used in robotics and offers various advantages: 1) A standardized format with existing tooling. 2) The graph structure allows composing the OSM models with domain-specific semantics by adding custom nodes, relations, and key-value pairs. 3) Information about many places is already available and can be used by robots, since it is driven by a community effort.
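As an illustrative sketch of the standardized-format argument (the node IDs, coordinates and tags below are made up), OSM's XML representation of nodes and their key-value tags can be read with standard tooling:

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical OSM fragment; element and attribute names follow the
# OSM XML format (node/tag elements with k/v attribute pairs).
OSM_XML = """
<osm>
  <node id="1" lat="50.78" lon="7.18">
    <tag k="amenity" v="charging_station"/>
  </node>
  <node id="2" lat="50.79" lon="7.19"/>
</osm>
"""

def load_nodes(xml_text):
    """Return {node_id: (lat, lon, {key: value})} for every node element."""
    nodes = {}
    for node in ET.fromstring(xml_text).findall("node"):
        tags = {t.get("k"): t.get("v") for t in node.findall("tag")}
        nodes[node.get("id")] = (float(node.get("lat")),
                                 float(node.get("lon")),
                                 tags)
    return nodes

print(load_nodes(OSM_XML)["1"][2])  # {'amenity': 'charging_station'}
```

A robotics application could extend such a graph with custom key-value pairs (e.g. traversability annotations) without breaking compatibility with existing OSM tooling, which is the composability advantage the work highlights.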
Friction effects require actuators to produce additional torque for a robot to move, which in turn increases energy consumption. We cannot eliminate friction, but we can optimize motions to make them more energy efficient by considering friction effects in motion computations. Optimizing motions means computing efficient joint torques/accelerations based on the different friction torques acting in each joint. Existing friction forces can even be exploited to support certain types of arm motions, e.g. standing still.
Reducing the energy consumption of robot arms provides many benefits, such as longer battery life for mobile robots, reduced heat in motor systems, etc.
The aim of this project is to extend an already available constrained hybrid dynamic solver by including static friction effects in the computation of energy-optimal motions. When the algorithm is extended to account for static friction, a convex optimization (maximization) problem must be solved.
The author of this hybrid dynamic solver has briefly outlined the approach for including static friction forces in the computation of motions, but without providing a detailed derivation of the approach or an elaboration demonstrating its correctness. Additionally, the author has outlined an idea for improving the computational efficiency of the approach, but without providing its derivation.
In this project, the proposed approach for extending the originally formulated algorithm has been fully derived and evaluated in order to show its feasibility. The evaluation is conducted in a simulation environment with a one-DOF robot arm and shows correct results for the computed motions. Furthermore, this project presents the derivation of the outlined method for improving the computational efficiency of the extended solver.
Maintaining orientation in an environment whose gravity differs from Earth's (1 g) is critical for an astronaut's operational performance. Such environments present a number of complexities for balance and motion. For example, when an astronaut tilts while ascending or descending an inclined plane on the moon, the gravity vector will be tilted correctly, but its magnitude will differ from that on Earth. If this results in a mis-perceived tilt, it may lead to postural and perceptual errors, such as mis-perceiving the orientation of oneself or the ground plane, and corresponding errors in task judgment.
Motion capture, often abbreviated mocap, generally aims at recording any kind of motion -- be it from a person or an object -- and at transforming it into a computer-readable format. The data recorded from (professional and non-professional) human actors in particular are typically used for analysis in e.g. medicine, sport science, or biomechanics to evaluate human motion across various factors. Motion capture is also widely used in the entertainment industry: in video games and films, realistic motion sequences and animations are generated through data-driven motion synthesis based on recorded motion (capture) data.
Although the amount of publicly available full-body-motion capture data is growing, the research community still lacks a comparable corpus of specialty motion data such as, e.g. prehensile movements for everyday actions. On the one hand, such data can be used to enrich (hand-over animation) full-body motion capture data - usually captured without hand motion data due to the drastic dimensional difference in articulation detail. On the other hand, it provides means to classify and analyse prehensile movements with or without respect to the concrete object manipulated and to transfer the acquired knowledge to other fields of research (e.g. from 'pure' motion analysis to robotics or biomechanics).
Therefore, the objective of this motion capture database is to provide well-documented, free motion capture data for research purposes.
The presented database, GraspDB14, contains over 2,000 prehensile movements of ten non-professional actors interacting with 15 different objects. Each grasp was performed five times by each actor. The motions are named systematically and contain an (anonymous) identifier for each actor as well as one for the object grasped or interacted with.
The data were recorded as joint angles (and raw 8-bit sensor data) which can be transformed into positional 3D data (3D trajectories of each joint).
In this document, we provide a detailed description of the GraspDB14 database as well as of its creation (for reproducibility).
Chapter 2 gives a brief overview of motion capture techniques, of freely available motion capture databases for both full-body and hand motions, and a short section on how such data is made useful and re-used. Chapter 3 describes the database recording process and details the recording setup and the recorded scenarios, including a list of objects and performed types of interaction. Chapter 4 covers the used file formats, contents, and naming patterns. We provide various tools for parsing, conversion, and visualisation of the recorded motion sequences and document their usage in Chapter 5.
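Since GraspDB14 stores motions as joint angles that can be transformed into positional 3D trajectories, the conversion can be illustrated with a minimal forward-kinematics sketch. The planar finger-like chain and its segment lengths below are illustrative assumptions, not the actual GraspDB14 hand model:

```python
import math

def forward_kinematics(joint_angles, segment_lengths):
    """Convert relative joint angles (radians) along a planar chain
    into joint positions (a simplified finger model).

    Each joint angle is measured relative to the previous segment;
    positions are accumulated from the chain's base at the origin.
    """
    positions = [(0.0, 0.0)]
    x, y, theta = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, segment_lengths):
        theta += angle              # accumulate relative rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        positions.append((x, y))
    return positions

# A fully extended finger: all joints at 0 rad, three unit segments
tips = forward_kinematics([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

The same accumulation principle extends to 3D with per-joint rotation matrices, which is how joint-angle recordings are typically turned into 3D trajectories.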
In the transmission and storage of data, a fundamental question is to what extent the data can be compressed without losing their information content.
A measure of the information content of data is therefore of fundamental importance. About seventy years ago, C. E. Shannon introduced such a measure and thereby founded the field of information theory, which has since contributed substantially to the design and realization of information and communication technologies. About twenty years later, A. N. Kolmogorov introduced a different measure of the information content of data. While Shannon's information theory is part of the curriculum of degree programmes in mathematics, computer science and electrical engineering, Kolmogorov's algorithmic information theory is far less well known and tends to be the subject of specialized courses.
In recent years, however, interest in this theory has been growing, not least because the relevant literature reports successful practical applications. The present work gives an introduction to the fundamental ideas of this theory and describes its potential applications to selected problems of theoretical computer science.
This text can be used as lecture notes for introductory courses on algorithmic information theory, as reading material for familiarizing oneself with the subject, and as a starting point for research and development work.
The use of wearable devices or "wearables" in the physical activity domain has been increasing in recent years. These devices are used as training tools, providing the user with detailed information about individual physiological responses and feedback on the physical training process. Advances in sensor technology, miniaturization, energy consumption and processing power have increased the usability of these wearables. Furthermore, the available sensor technologies must be reliable, valid, and usable. Considering the variety of existing sensors, not all of them are suitable for integration into wearables. The application and development of wearables has to consider the characteristics of the physical training process to improve their effectiveness and efficiency as training tools. During physical training, it is essential to elicit the individually optimal strain to evoke the desired training adaptations. One important goal is to neither overstrain nor underchallenge the user. Many wearables use heart rate as an indicator of this individual strain. However, due to a variety of internal and external influencing factors, heart rate kinetics are highly variable, making it difficult to control the stress eliciting the individually optimal strain. For optimal training control it is essential to model and predict individual responses and to adapt the external stress if necessary. The basis for this modeling is the valid and reliable recording of these individual responses. Depending on the heart rate kinetics and the obtained physiological data, different models and techniques are available for strain or training control. The aim of this review is to give an overview of the measurement, prediction, and control of individual heart rate responses. To this end, the available sensor technologies for measuring individual heart rate responses are analyzed, approaches to model and predict these individual responses are discussed, and their feasibility for wearables is assessed.
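The heart-rate response to a constant external stress is often approximated by a first-order exponential model with an individual time constant. The sketch below illustrates this idea only; the function name and all parameter defaults are illustrative assumptions, not values from the review:

```python
import math

def heart_rate_response(t, hr_rest=60.0, delta_hr=60.0, tau=30.0):
    """First-order step response of heart rate to a constant workload.

    t: time since workload onset (s)
    hr_rest: resting heart rate (bpm)
    delta_hr: steady-state increase above rest (bpm)
    tau: individual time constant (s)
    All default values are illustrative, not fitted.
    """
    return hr_rest + delta_hr * (1.0 - math.exp(-t / tau))
```

Calibrating `tau` and `delta_hr` per subject is what makes such a model usable for predicting and controlling individual strain.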
Science Track FrOSCon 2016
(2018)
In 2015, the Free and Open Source Software Conference celebrated its 10th anniversary. Born from an idea of students, research associates and professors of the Department of Computer Science, it has developed into one of the most important conferences on free and open-source software in Germany.
More and more low-power wide-area networks (LPWANs) are being deployed, and planning the gateway locations plays a significant role for network range, performance and profitability. We chose LoRa as one LPWAN technology and evaluated the accuracy of the Received Signal Strength Indication (RSSI) of different chipsets in a laboratory environment. The results show that the chipsets report significantly different RSSI values. To estimate the range of an LPWAN beforehand, path loss models have been proposed. In contrast to previous work, we evaluated the Longley-Rice Irregular Terrain Model, which makes use of real-world elevation data to predict the path loss. To verify the results of that prediction, an extensive measurement campaign in a semi-urban area in Germany was conducted. The results show that terrain data can increase the prediction accuracy.
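For a first-order range estimate without terrain data, a simple log-distance path loss model is commonly used as a baseline. The sketch below illustrates that baseline only; it is not the Longley-Rice model (which additionally uses elevation data), and the exponent and reference loss are illustrative assumptions:

```python
import math

def log_distance_path_loss(d_m, pl0_db=40.0, n=2.7, d0_m=1.0):
    """Log-distance path loss in dB.

    d_m: link distance (m), pl0_db: reference loss at d0_m,
    n: path-loss exponent (illustrative semi-urban value).
    """
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

def rssi_dbm(tx_power_dbm, d_m, **kwargs):
    """Predicted RSSI = transmit power minus path loss."""
    return tx_power_dbm - log_distance_path_loss(d_m, **kwargs)
```

Comparing such a distance-only prediction with a terrain-aware model like Longley-Rice is precisely where elevation data can improve accuracy.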
Quantifying the spectrum occupancy in an outdoor 5 GHz WiFi network with directional antennas
(2018)
WiFi-based Long Distance networks are seen as a promising alternative for bringing broadband connectivity to rural areas. A key factor for the profitability of these networks is the use of license-free bands. This work quantifies the current spectrum occupancy in our testbed, which covers both rural and urban areas. The data mining is conducted on the same WiFi card and in parallel with an operational network. The presented evaluations reveal tendencies for various aspects: occupancy compared to population density, occupancy fluctuations, (jointly) vacant channels, the mean channel-vacant duration, different approaches to model/forecast occupancy, and correlations among related interfaces.
The elucidation of conformations and relative potential energies (rPEs) of small molecules has a long history across a diverse range of fields. Periodically, it is helpful to revisit what conformations have been investigated and to provide a consistent theoretical framework for which clear comparisons can be made. In this paper, we compute the minima, first- and second-order saddle points, and torsion-coupled surfaces for methanol, ethanol, propan-2-ol, and propanol using consistent high-level MP2 and CCSD(T) methods. While for certain molecules more rigorous methods were employed, the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pV5Z theory level was used throughout to provide relative energies of all minima and first-order saddle points. The rPE surfaces were uniformly computed at the CCSD(T)/aug-cc-pVTZ//MP2/aug-cc-pVTZ level. To the best of our knowledge, this represents the most extensive study for alcohols of this kind, revealing some new aspects. Especially for propanol, we report several new conformations that were previously not investigated. Moreover, two metrics are included in our analysis that quantify how the selected surfaces are similar to one another and hence improve our understanding of the relationship between these alcohols.
Serine/threonine kinase 4 (STK4) deficiency is an autosomal recessive genetic condition that leads to primary immunodeficiency (PID), typically characterized by lymphopenia, recurrent infections and Epstein-Barr virus (EBV)-induced lymphoproliferation and lymphoma. State-of-the-art treatment regimens consist of prevention or treatment of infections, immunoglobulin substitution (IVIG) and restoration of the immune system by hematopoietic stem cell transplantation. Here, we report on two patients from two consanguineous families of Turkish (patient P1) and Moroccan (patient P2) descent with PID due to homozygous STK4 mutations. P1 harbored a previously reported frameshift mutation (c.1103delT, p.M368RfsX2) and P2 a novel splice donor site mutation (c.525+2T>G). Both patients presented in childhood with recurrent infections, CD4 lymphopenia and dysregulated immunoglobulin levels. Patient P1 developed a highly malignant B cell lymphoma at the age of 10 years and a second, independent Hodgkin lymphoma 5 years later. To our knowledge, she is the first reported STK4-deficient case who developed lymphoma in the absence of detectable EBV or other common viruses. Lymphoma development may be due to the lacking tumor-suppressive function of STK4 or the perturbed immune surveillance caused by the lack of CD4+ T cells. Our data should raise physicians' awareness of (1) the lymphoma proneness of STK4-deficient patients even in the absence of EBV infection and (2) possibly underlying STK4 deficiency in pediatric patients with a history of recurrent infections, CD4 lymphopenia and lymphoma and unknown genetic make-up. Patient P2 experienced recurrent otitis in childhood, but when she presented at the age of 14, she showed clinical and immunological characteristics similar to patients suffering from Autoimmune Lymphoproliferative Syndrome (ALPS): elevated DNT cell numbers, non-malignant lymphadenopathy, hepatosplenomegaly, hemolytic anemia and hypergammaglobulinemia.
Patient P1 also presented with ALPS-like features (lymphadenopathy, elevated DNT cell numbers and increased vitamin B12 levels), and both patients were initially diagnosed clinically as ALPS-like. Closer examination of P2, however, revealed an active EBV infection, and genetic testing identified a novel STK4 mutation. Neither patient harbored mutations of the Fas receptor-mediated apoptotic pathway typically associated with ALPS, and Fas-mediated apoptosis was not affected. The presented case reports extend the clinical spectrum of STK4 deficiency.
Females are influenced more than males by visual cues during many spatial orientation tasks, but females rely more heavily on gravitational cues during visual-vestibular conflict. Are there gender biases in the relative contributions of vision, gravity and the internal representation of the body to the perception of upright? And might any such biases be affected by low gravity? Sixteen participants (8 female) viewed a highly polarized visual scene tilted ±112° while lying supine on the European Space Agency's short-arm human centrifuge. The centrifuge was rotated to simulate 24 logarithmically spaced g-levels along the long axis of the body (0.04-0.5 g at ear level). The perception of upright was measured using the Oriented Character Recognition Test (OCHART). OCHART uses the ambiguous symbol "p" shown in different orientations. Participants decided whether it was a "p" or a "d", from which the perceptual upright (PU) can be calculated for each visual/gravity combination. The relative contributions of vision, gravity and the internal representation of the body were then calculated. Experiments were repeated while upright. The relative contribution of vision to the PU was smaller in females than in males (t=-18.48, p≤0.01). Females placed more emphasis on the gravity cue instead (f: 28.4%, m: 24.9%), while body weightings were constant (f: 63.0%, m: 63.2%). When upright (1 g) in this and other studies (e.g., Barnett-Cowan et al. 2010, EJN, 31, 1899), females placed more emphasis on vision in this task than males. The reduction in weight allocated by females to vision in simulated low-gravity conditions compared to when upright under normal gravity may be related to similar female behaviour in response to other instances of visual-vestibular conflict. Why this is the case and at which point the perceptual change happens requires further research.
Analytical redundancy relations are fundamental in model-based fault detection and isolation. Their numerical evaluation yields a residual that may serve as a fault indicator. Considering switching linear time-invariant system models that use ideal switches, it is shown that analytical redundancy relations can be systematically deduced from a diagnostic bond graph with fixed causalities that hold for all modes of operation. Moreover, for a faultless system, the presented bond graph-based approach makes it possible to deduce a unique implicit state equation with coefficients that are functions of the discrete switch states. Devices or phenomena with fast state transitions, for example, electronic diodes and transistors, clutches, or hard mechanical stops, are often represented by ideal switches, which give rise to variable causalities. However, in the presented approach, fixed causalities are assigned only once to a diagnostic bond graph. That is, causal strokes at switch ports in the diagnostic bond graph reflect only the switch-state configuration in a specific system mode. The actual discrete switch states are implicitly taken into account by the discrete values of the switch moduli. The presented approach starts from a diagnostic bond graph with fixed causalities and from a partitioning of the bond graph junction structure and systematically deduces a set of equations that determines the wanted residuals. Elimination steps result in analytical redundancy relations in which the states of the storage elements and the outputs of the ideal switches are unknowns. For the latter two unknowns, the approach produces an implicit differential-algebraic equation system. To illustrate the general matrix-based approach, an electromechanical system and two small electronic circuits are considered.
Their equations are directly derived from a diagnostic bond graph by following causal paths and are reformulated so that they conform with the matrix equations obtained by the formal approach based on a partitioning of the bond graph junction structure. For one of the three mode-switching examples, a fault scenario has been simulated.
Multi-robot systems (MRS) are capable of performing a set of tasks by dividing them among the robots in the fleet. One of the challenges of working with multi-robot systems is deciding which robot should execute each task. Multi-robot task allocation (MRTA) algorithms address this problem by explicitly assigning tasks to robots with the goal of maximizing the overall performance of the system. The indoor transportation of goods is a practical application of multi-robot systems in the area of logistics. The ROPOD project works on developing multi-robot system solutions for logistics in hospital facilities. The correct selection of an MRTA algorithm is crucial for enhancing transportation tasks. Several multi-robot task allocation algorithms exist in the literature, but only a few experimental comparative analyses have been performed. This project analyzes and assesses the performance of MRTA algorithms for allocating supply cart transportation tasks to a fleet of robots. We conducted a qualitative analysis of MRTA algorithms, selected the most suitable ones based on the ROPOD requirements, implemented four of them (MURDOCH, SSI, TeSSI, and TeSSIduo), and evaluated the quality of their allocations using a common experimental setup and 10 experiments. Our experiments include offline and semi-online allocation of tasks as well as scalability tests, and use virtual robots implemented as Docker containers. This design should facilitate deployment of the system on the physical robots. Our experiments show that TeSSI and TeSSIduo best suit the ROPOD requirements. Both use temporal constraints to build task schedules and run in polynomial time, which allows them to scale well with the number of tasks and robots. TeSSI distributes the tasks among more robots in the fleet, while TeSSIduo tends to use a lower percentage of the available robots.
Subsequently, we have integrated TeSSI and TeSSIduo to perform multi-robot task allocation for the ROPOD project.
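The core idea of the SSI auction family evaluated above can be sketched as follows. This is a simplified illustration of sequential single-item allocation, not the ROPOD implementation; the function names and the cost interface are assumptions:

```python
def ssi_auction(robots, tasks, cost):
    """Sequential Single-Item (SSI) auction sketch.

    robots: dict robot_id -> list of tasks already assigned (schedule)
    tasks:  list of unallocated tasks
    cost(robot_id, schedule, task) -> marginal cost of adding the task

    In each round every robot bids on every remaining task; the overall
    cheapest (robot, task) pair wins and the task is allocated.
    """
    remaining = list(tasks)
    while remaining:
        bid, winner, task = min(
            ((cost(r, robots[r], t), r, t)
             for r in robots for t in remaining),
            key=lambda b: b[0],
        )
        robots[winner].append(task)
        remaining.remove(task)
    return robots
```

With a marginal-cost function based on travel distance, each round allocates the globally cheapest robot-task pair, which is what gives SSI-style auctions their good anytime behaviour.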
Currently, a variety of methods exist for creating different types of spatio-temporal world models. Despite the numerous methods for this type of modeling, there exists no methodology for comparing the different approaches or their suitability for a given application, e.g. logistics robots. In order to establish a means for comparing and selecting the best-fitting spatio-temporal world modeling technique, a methodology and a standard set of criteria must be established. To that end, state-of-the-art methods for this type of modeling will be collected, listed, and described. Existing methods used for evaluation will also be collected where possible.
Using the collected methods, new criteria and techniques will be devised to enable the comparison of various methods in a qualitative manner. Experiments will be proposed to further narrow and ultimately select a spatio-temporal model for a given purpose. An example network of autonomous logistic robots, ROPOD, will serve as a case study used to demonstrate the use of the new criteria. This will also serve to guide the design of future experiments that aim to select a spatio-temporal world modeling technique for a given task. ROPOD was specifically selected as it operates in a real-world, human shared environment. This type of environment is desirable for experiments as it provides a unique combination of common and novel problems that arise when selecting an appropriate spatio-temporal world model. Using the developed criteria, a qualitative analysis will be applied to the selected methods to remove unfit options.
Then, experiments will be run on the remaining methods to provide comparative benchmarks. Finally, the results will be analyzed and recommendations to ROPOD will be made.
Background: Virtual reality combined with spherical treadmills is used across species for studying neural circuits underlying navigation.
New Method: We developed an optical flow-based method for tracking treadmill ball motion in real time using a single high-resolution camera.
Results: Tracking accuracy and timing were determined using calibration data. Ball tracking was performed at 500 Hz and integrated with an open source game engine for virtual reality projection. The projection was updated at 120 Hz with a latency with respect to ball motion of 30 ± 8 ms.
Comparison with Existing Method(s): Optical flow-based tracking of treadmill motion is typically achieved using optical mice. The camera-based optical flow tracking system developed here is based on off-the-shelf components and offers control over the image acquisition and processing parameters. This results in flexibility with respect to tracking conditions – such as ball surface texture, lighting conditions, or ball size – as well as camera alignment and calibration.
Conclusions: A fast system for rotational ball motion tracking suitable for virtual reality animal behavior across different scales was developed and characterized.
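The relationship between image-space optical flow and ball rotation can be sketched with a small-angle approximation, where the measured surface arc length equals radius times rotation angle. The function below is an illustrative assumption, not the calibration procedure of the presented system:

```python
def flow_to_rotation(dx_px, dy_px, px_per_mm, ball_radius_mm):
    """Convert optical-flow displacement on a sphere's surface into
    rotation angles (radians) about the two axes perpendicular to the
    camera's viewing direction.

    dx_px, dy_px: flow displacement in pixels
    px_per_mm:    image scale at the ball surface
    ball_radius_mm: treadmill ball radius
    Small-angle approximation: arc length = radius * angle.
    """
    arc_x_mm = dx_px / px_per_mm
    arc_y_mm = dy_px / px_per_mm
    return arc_x_mm / ball_radius_mm, arc_y_mm / ball_radius_mm
```

In a real pipeline the pixel scale and alignment would come from the camera calibration; here they are free parameters.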
The application of Raman and infrared (IR) microspectroscopy yields hyperspectral data containing complementary information on the molecular composition of a sample. The classification of hyperspectral data from the individual spectroscopic approaches is already state of the art in several fields of research. However, more complex structured samples and difficult measuring conditions may affect the accuracy of classification results negatively and can make a successful classification of the sample components challenging. This contribution presents a comprehensive comparison of supervised pixel classification of hyperspectral microscopic images, showing that a combined approach of Raman and IR microspectroscopy has a high potential to improve classification rates through a meaningful extension of the feature space. It shows that the complementary information in spatially co-registered hyperspectral images of polymer samples can be accessed using different feature extraction methods and, once fused on the feature level, is in general more accurately classifiable in a pattern recognition task than the data derived from the individual spectroscopic approaches.
Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting, in terms of visualization and interaction, for the collaborative analysis of a real-world task, we have to understand the group dynamics during work on large displays. Among other things, we have to study what effects different task conditions have on user behavior.
In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. For that, we designed two tasks: a task that resembles the information foraging loop and a task that resembles the connecting facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information foraging activity requires the user to work with individual data elements to look into details. Here, the users predominantly occupy only a small portion of the display. In contrast, the connecting facts activity requires the user to work with the entire information space. Therefore, the user has to overview the entire display.
We observed 12 groups for an average of two hours each and gathered qualitative and quantitative data. During data analysis, we focused specifically on the participants' collaborative coupling and territorial behavior.
We found that participants tended to subdivide the task in order to approach it in parallel, which they considered more effective. We describe the subdivision strategies for both task conditions. We also detected and describe multiple user roles, as well as a new coupling style that fits neither of the established categories (loosely or tightly coupled). Moreover, we observed a territory type that has not previously been described in the literature. In our opinion, this territory type can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for long-term work.
In an effort to assist researchers in choosing basis sets for quantum mechanical modeling of molecules (i.e. balancing calculation cost versus desired accuracy), we present a systematic study on the accuracy of computed conformational relative energies and their geometries in comparison to MP2/CBS and MP2/AV5Z data, respectively. In order to do so, we introduce a new nomenclature to unambiguously indicate how a CBS extrapolation was computed. Nineteen minima and transition states of buta-1,3-diene, propan-2-ol and the water dimer were optimized using forty-five different basis sets. Specifically, this includes one Pople (i.e. 6-31G(d)), eight Dunning (i.e. VXZ and AVXZ, X=2-5), twenty-five Jensen (i.e. pc-n, pcseg-n, aug-pcseg-n, pcSseg-n and aug-pcSseg-n, n=0-4) and nine Karlsruhe (e.g. def2-SV(P), def2-QZVPPD) basis sets. The molecules were chosen to represent both common and electronically diverse molecular systems. In comparison to MP2/CBS relative energies computed using the largest Jensen basis sets (i.e. n=2,3,4), the use of smaller sizes (n=0,1,2 and n=1,2,3) provides results that are within 0.11-0.24 and 0.09-0.16 kcal/mol, respectively. To practically guide researchers in their basis set choice, an equation is introduced that ranks basis sets based on a user-defined balance between their accuracy and calculation cost. Furthermore, we explain why the aug-pcseg-2, def2-TZVPPD and def2-TZVP basis sets are very suitable choices to balance speed and accuracy.
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of the European and American populations suffers from back pain at some point in their lives. One successful approach to address lower back pain is postural training, which can be supported by wearable devices providing real-time feedback about the user's posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes, which are meaningful to professional posture trainers.
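The unsupervised clustering step can be illustrated with a minimal k-means sketch. The feature vectors (e.g. per-position angle differences between before and after snapshots) and the function names are assumptions for illustration, not the actual SpineTracker feature space:

```python
import math
import random

def _dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means for clustering posture-change feature vectors.

    points: list of equal-length feature tuples.
    Returns (centroids, labels), where labels[i] is the cluster index
    of points[i].
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # random initial centroids
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda c: _dist(p, centroids[c]))
                  for p in points]
        # move each centroid to the mean of its assigned points
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, labels
```

On well-separated postural-change groups such a procedure recovers the groups regardless of initialization; in practice one would also inspect cluster sizes and within-cluster spread.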
The perception of the perceptual upright (PU) varies between contexts and across individuals, depending on the weighting of different gravity-related and body-based cues. The aim of the project was to systematically investigate the relationships between visual and gravitational cues. The project built on previous studies whose results indicate that a gravity level of about 0.15 g is necessary to provide effective self-orientation information (Herpers et al., 2015; Harris et al., 2014).
In the project described here, artificial gravity conditions were specifically considered in order to quantify more precisely the gravity threshold above which a perceptible influence can be observed, and thus to confirm the above hypothesis. It could be shown that the centripetal force acting along the long axis of the body on a rotating centrifuge is as effective as standing under normal gravity in eliciting the sense of perceptual upright. The obtained data further indicate that a gravitational field of at least 0.15 g is necessary to provide effective orientation information for the perception of upright. This roughly corresponds to the gravitational force of 0.17 g on the Moon. For a linear acceleration of the body, the vestibular threshold is about 0.1 m/s², and the lunar value of 1.6 m/s² is thus clearly above this threshold.
Interactive Object Detection
(2019)
The success of state-of-the-art object detection methods depends heavily on the availability of a large amount of annotated image data. The raw image data available from various sources are abundant but unannotated. Annotating image data is often costly, time-consuming, or requires expert help. In this work, a learning paradigm called active learning is explored, which uses user interaction to obtain annotations for a subset of the dataset. The goal of active learning is to achieve superior object detection performance with images that are annotated on demand. To realize the active learning method, the trade-off between the effort to annotate unlabeled data (the annotation cost) and the performance of the object detection model is minimized.
A Random Forest-based method called Hough Forest is chosen as the object detection model, and the annotation cost is calculated from the predicted false positive and false negative rates. The framework is successfully evaluated on two computer vision benchmarks and two Carl Zeiss custom datasets. In addition, an evaluation of RGB, HOG, and deep features for the task is presented.
Experimental results show that using deep features with Hough Forest achieves the best performance. By employing active learning, it is demonstrated that performance comparable to the fully supervised setting can be achieved by annotating just 2.5% of the images. To this end, an annotation tool was developed for user interaction during active learning.
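The pool-based selection idea behind active learning can be sketched generically as follows. This is a schematic loop under assumed interfaces (`train`, `predict_uncertainty`, `annotate`), not the Hough-Forest-specific pipeline of this work:

```python
def active_learning_loop(unlabeled, train, predict_uncertainty,
                         annotate, budget, batch_size=1):
    """Generic pool-based active learning sketch.

    unlabeled: pool of unlabeled samples
    train(labeled_pairs) -> model
    predict_uncertainty(model, sample) -> float (higher = less certain)
    annotate(sample) -> label (the user-interaction step)
    budget: total number of annotations allowed

    Repeatedly annotates the most uncertain samples until the budget
    is spent, retraining the model after each batch.
    """
    labeled, model = [], None
    pool = list(unlabeled)
    while budget > 0 and pool:
        if model is None:
            chosen = pool[:batch_size]   # cold start: take any samples
        else:
            pool.sort(key=lambda x: -predict_uncertainty(model, x))
            chosen = pool[:batch_size]   # most uncertain first
        for x in chosen:
            labeled.append((x, annotate(x)))
            pool.remove(x)
            budget -= 1
            if budget == 0:
                break
        model = train(labeled)
    return model, labeled
```

The annotation cost enters through `budget`: the loop stops once the allowed user effort is exhausted, which is how performance can be traded against annotation effort.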
In mathematical modeling by means of performance models, the Fitness-Fatigue Model (FF-Model) is a common approach in sport and exercise science to study the training-performance relationship. The FF-Model uses an initial basic level of performance and two antagonistic terms (for fitness and fatigue). By model calibration, the parameters are adapted to the subject's individual physical response to training load. Although the simulation of the recorded training data in most cases shows useful results once the model is calibrated and all parameters are adjusted, this method has two major difficulties. First, a fitted value for the basic performance will usually be too high. Second, without modification, the model cannot simply be used for prediction. By rewriting the FF-Model such that the effects of the former training history can be analyzed separately – we call those terms preloads – it is possible to close the gap between a more realistic initial performance level and an athlete's actual performance level without distorting other model parameters, and to increase model accuracy substantially. The fitting error of the preload-extended FF-Model is less than 32% of the error of the FF-Model without preloads; its prediction error is around 54% of the error of the FF-Model without preloads.
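In its classical (Banister) formulation, the FF-Model combines the basic performance level with exponentially decaying fitness and fatigue terms. The sketch below computes a performance prediction from a sequence of daily training loads; the default parameter values are illustrative, not fitted values from this work, and the preload extension is not included:

```python
import math

def ff_model(loads, p0, k1=1.0, tau1=42.0, k2=2.0, tau2=7.0):
    """Classical Banister Fitness-Fatigue model.

    loads: daily training loads w(s) for days s = 1..n.
    Returns the predicted performance on day t = n + 1:
        p(t) = p0 + k1 * sum_s w(s) * exp(-(t - s) / tau1)
                  - k2 * sum_s w(s) * exp(-(t - s) / tau2)
    k1, tau1 govern the slow fitness term; k2, tau2 the fast
    fatigue term. All defaults are illustrative.
    """
    t = len(loads) + 1
    fitness = sum(w * math.exp(-(t - s) / tau1)
                  for s, w in enumerate(loads, start=1))
    fatigue = sum(w * math.exp(-(t - s) / tau2)
                  for s, w in enumerate(loads, start=1))
    return p0 + k1 * fitness - k2 * fatigue
```

Because the fatigue term decays faster but is weighted more heavily, performance dips immediately after a load and recovers later, which is the antagonistic behaviour the abstract refers to.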
Quantifying Interference in WiLD Networks using Topography Data and Realistic Antenna Patterns
(2019)
Avoiding possible interference is a key aspect of maximizing the performance of Wi-Fi-based Long Distance networks. In this paper we quantify self-induced interference based on data derived from our testbed and match the findings against simulations. By enhancing current simulation models with two key elements – the usage of detailed antenna patterns instead of the cone model, and propagation modeling enhanced by license-free topography data – we significantly reduce the deviation between testbed and simulation. Based on the gathered data we discuss several possible optimization approaches, such as physical separation of local radios, tuning the sensitivity of the transmitter, and using centralized rather than distributed channel assignment algorithms. While our testbed is based on 5 GHz Wi-Fi, we briefly discuss the possible impact of our results on other frequency bands.
Bond graph software can simulate bond graph models without the user needing to manually derive equations. This offers the power to model larger and more complex systems than in the past. Multibond graphs (those with vector bonds) offer a compact model which further eases handling multibody systems. Although multibond graphs can be simulated successfully, the use of vector bonds can present difficulties. In addition, most qualitative, bond graph–based exploitation relies on the use of scalar bonds. This article discusses the main methods for simulating bond graphs of multibody systems, using a graphical software platform. The transformation between models with vector and scalar bonds is presented. The methods are then compared with respect to both time and accuracy, through simulation of two benchmark models. This article is a tutorial on the existing methods for simulating three-dimensional rigid and holonomic multibody systems using bond graphs and discusses the difficulties encountered. It then proposes and adapts methods for simulating this type of system directly from its bond graph within a software package. The value of this study is in giving practical guidance to modellers, so that they can implement the adapted method in software.
Various smart home-automation devices such as lamps, locks, and thermostats are spreading rapidly in private households. A typical communication protocol for this device class is Bluetooth Low Energy (BLE). This work presents a structured security analysis for BLE. The described approach categorizes known attack vectors and outlines a possible setup for such an analysis. In the course of this work, several security-relevant problems were uncovered that allow attackers to take over the devices completely. It turned out that security features provided by the standard, such as encryption and integrity checks, are frequently not implemented at all or are implemented incorrectly.
More and more devices will be connected to the internet [3]. Many devices are part of the so-called Internet of Things (IoT), which comprises many low-power devices, often powered by a battery. These devices mainly communicate with the manufacturer's back end and deliver personal data and secrets such as passwords.
The choice of suitable semiconducting metal oxide (MOX) gas sensors for the detection of a specific gas or gas mixture is time-consuming, since the sensor's sensitivity needs to be characterized at multiple temperatures to find its optimal operating conditions. To obtain reliable measurement results, it is very important that the power for the sensor's integrated heater is stable, regulated, and error-free (or error-tolerant). Especially the error-free requirement can only be achieved if the power supply implements failure-avoidance and failure-detection methods. The biggest challenge is deriving multiple different voltages from a common supply in an efficient way while keeping the system as small and lightweight as possible. This work presents a reliable, compact, embedded system that addresses the power supply requirements for fully automated simultaneous sensor characterization for up to 16 sensors at multiple temperatures. The system implements efficient (avg. 83.3% efficiency) voltage conversion with low ripple output (<32 mV) and supports static or temperature-cycled heating modes. Voltage and current of each channel are constantly monitored and regulated to guarantee reliable operation. To evaluate the proposed design, 16 sensors were screened. The results are shown in the experimental part of this work.
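The per-channel monitoring and failure detection described above can be sketched as follows. This is an illustrative model, not the paper's firmware: the nominal heater resistance, tolerance, and fault thresholds are assumptions chosen for the example, and a real implementation would also actively cut power to a faulty channel.

```python
HEATER_R_NOM = 78.0  # nominal heater resistance in ohms (assumed for illustration)

def check_channel(v_set, v_meas, i_meas, tol=0.05):
    """Return (ok, fault_description) for one heater channel.

    A channel is considered faulty if the measured voltage deviates from the
    setpoint by more than `tol` (relative), or if the implied heater
    resistance suggests an open, shorted, or degraded heater element.
    """
    if abs(v_meas - v_set) > tol * v_set:
        return False, "voltage out of regulation"
    if i_meas <= 0.0:
        return False, "open heater (no current)"
    r = v_meas / i_meas
    if r < 0.5 * HEATER_R_NOM:
        return False, "short circuit suspected"
    if r > 2.0 * HEATER_R_NOM:
        return False, "heater degradation suspected"
    return True, None

def supervise(channels):
    """Check up to 16 channels; collect faults per channel index.

    `channels` maps channel index -> (v_set, v_meas, i_meas).
    """
    faults = {}
    for ch, (v_set, v_meas, i_meas) in channels.items():
        ok, fault = check_channel(v_set, v_meas, i_meas)
        if not ok:
            faults[ch] = fault  # real firmware would also disable the channel here
    return faults
```

Running this check continuously on every channel is what turns a plain regulated supply into the "failure-detection" supply the abstract calls for: an open or shorted heater is flagged before it can corrupt a multi-hour characterization run.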
An essential measure of autonomy in service robots designed to assist humans is adaptivity to the various contexts of human-oriented tasks. These robots may have to frequently execute the same action, but subject to subtle variations in task parameters that determine optimal behaviour. Such actions are traditionally executed by robots using pre-determined, generic motions, but a better approach could utilize robot arm maneuverability to learn and execute different trajectories that work best in each context.
In this project, we explore a robot skill acquisition procedure that allows incorporating contextual knowledge, adjusting executions according to context, and improving through experience, as a step towards more adaptive service robots. We propose an apprenticeship learning approach to achieving context-aware action generalisation on the task of robot-to-human object hand-over. The procedure combines learning from demonstration, with which a robot learns to imitate a demonstrator's execution of the task, with a reinforcement learning strategy, which enables subsequent experiential learning of contextualized policies, guided by information about context that is integrated into the learning process. By extending the initial, static hand-over policy to a contextually adaptive one, the robot derives and executes variants of the demonstrated action that most appropriately suit the current context. We use dynamic movement primitives (DMPs) as compact motion representations, and a model-based Contextual Relative Entropy Policy Search (C-REPS) algorithm for learning policies that can specify hand-over position, trajectory shape, and execution speed, conditioned on context variables. Policies are learned using simulated task executions, before transferring them to the robot and evaluating emergent behaviours.
We demonstrate the algorithm’s ability to learn context-dependent hand-over positions, and new trajectories, guided by suitable reward functions, and show that the current DMP implementation limits learning context-dependent execution speeds. We additionally conduct a user study involving participants assuming different postures and receiving an object from the robot, which executes hand-overs by either exclusively imitating a demonstrated motion, or selecting hand-over positions based on learned contextual policies and adapting its motion accordingly. The results confirm the hypothesized improvements in the robot’s perceived behaviour when it is context-aware and adaptive, and provide useful insights that can inform future developments.
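The abstract names dynamic movement primitives (DMPs) as the motion representation. The following is a minimal one-dimensional discrete DMP rollout, a textbook formulation rather than the authors' implementation: the gains, basis placement, and time constants are common default choices, not the paper's exact configuration.

```python
import numpy as np

def dmp_rollout(y0, g, w, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Roll out a 1-D discrete dynamic movement primitive.

    Transformation system: tau*dy = z,  tau*dz = alpha*(beta*(g - y) - z) + f(x)
    Canonical system:      tau*dx = -alpha_x * x
    The forcing term f is a normalized weighted sum of Gaussian basis
    functions along the phase x; with all weights zero, the rollout is a
    plain critically damped reach from y0 to the goal g.
    """
    n = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, n))        # basis centres along the phase
    d = np.diff(c)
    h = 1.0 / np.hstack((d, d[-1:])) ** 2              # widths from centre spacing

    y, z, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        dz = (alpha * (beta * (g - y) - z) + f) / tau
        dy = z / tau
        z += dz * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt                 # phase decays, so f fades out
        traj.append(y)
    return np.array(traj)
```

This compactness is what makes DMPs attractive for the policy search described above: C-REPS only has to output a small vector (weights `w`, goal `g`, time scale `tau`) per context, and the dynamical system guarantees a smooth trajectory that still converges to the hand-over position.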