H-BRS Bibliography: Fachbereich Informatik, year of publication 2017 (89 entries)
Document types: Conference Object (45), Article (16), Report (10), Part of a Book (6), Preprint (4), Doctoral Thesis (3), Book (monograph, edited volume) (2), Conference Proceedings (2), Research Data (1)
In order to achieve the highest possible performance, the ray traversal and intersection routines at the core of every high-performance ray tracer are usually hand-coded, heavily optimized, and implemented separately for each hardware platform—even though they share most of their algorithmic core. The results are implementations that heavily mix algorithmic aspects with hardware and implementation details, making the code non-portable and difficult to change and maintain.
In this paper, we present a new approach that offers the ability to define in a functional language a set of conceptual, high-level language abstractions that are optimized away by a special compiler in order to maximize performance. Using this abstraction mechanism we separate a generic ray traversal and intersection algorithm from its low-level aspects that are specific to the target hardware. We demonstrate that our code is not only significantly more flexible, simpler to write, and more concise but also that the compiled results perform as well as state-of-the-art implementations on any of the tested CPU and GPU platforms.
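The separation the abstract describes can be illustrated with a small sketch (the paper itself uses a functional language whose specializing compiler removes the abstraction overhead; all names here are hypothetical): a generic closest-hit loop is written once, parameterized over a primitive-specific intersection routine that would be swapped per platform and primitive type.

```python
def closest_hit(ray_origin, ray_dir, primitives, intersect):
    """Generic traversal core: independent of primitive type and platform."""
    best_t, best_prim = float("inf"), None
    for prim in primitives:
        t = intersect(ray_origin, ray_dir, prim)  # primitive-specific part
        if t is not None and 0.0 < t < best_t:
            best_t, best_prim = t, prim
    return best_t, best_prim

def intersect_sphere(o, d, sphere):
    """One concrete 'low-level' routine: ray/sphere intersection."""
    (cx, cy, cz), r = sphere
    oc = (o[0] - cx, o[1] - cy, o[2] - cz)
    a = sum(di * di for di in d)
    b = 2.0 * sum(oci * di for oci, di in zip(oc, d))
    c = sum(oci * oci for oci in oc) - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t if t > 0.0 else None
```

In Python the indirection through `intersect` costs a call per primitive; the point of the paper's approach is that a specializing compiler can inline such higher-order parameters away entirely, so the generic formulation carries no runtime cost.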
Maßgefertigte Abläufe (Tailor-Made Processes)
(2017)
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the capabilities developed for operating in industrial environments, including reliable and precise navigation, flexible manipulation, and robust object recognition.
This paper proposes a novel approach to the generation of state equations from a bond graph (BG) of a mode-switching linear time-invariant model. Fast state transitions are modelled by ideal or non-ideal switches. Fixed causalities are assigned following the Standard Causality Assignment Procedure such that the number of storage elements in integral causality is maximised. A system of differential and algebraic equations (DAEs) that holds for all system modes is derived from the BG. A distinction is made between storage elements with mode-independent causality and those that change causality due to switch state changes.
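The mode-dependent equation system the abstract refers to can be written schematically as follows (an illustrative standard form for mode-switching LTI models, not the paper's actual derivation): collecting the switch states in a discrete mode m, each mode yields its own linear coefficient matrices, while storage elements whose causality depends on the mode contribute algebraic constraints, which is why a DAE rather than a pure ODE results.

```latex
% Illustrative standard form, not the paper's exact equations:
% x - states of storage elements with mode-independent (integral) causality
% z - variables of storage elements whose causality depends on the mode m
% n_s - number of switches, so there are up to 2^{n_s} modes
\dot{x}(t) = A_m\, x(t) + B_m\, u(t) + E_m\, z(t), \qquad
0 = F_m\, x(t) + G_m\, z(t) + H_m\, u(t), \qquad
m \in \{1, \dots, 2^{n_s}\}
```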
Where laboratory experiments are too elaborate, too expensive, too slow, or too dangerous, or where material properties are not experimentally accessible at all, computer simulations of atoms and molecules can replace or complement them. They thereby enable reductions in cost, development time, and material use. The molecular models required for these simulations contain numerous parameters that the simulation practitioner must set or select. A suitable parameterization is only possible with adequate knowledge of the effects of the parameters on the quantities and properties to be computed. One group of standard parameters in molecular simulations are the partial charges of the individual atoms within a molecule. The spatial charge distribution within the molecule is approximated by point charges at the atomic centers. For this approximation, various approaches exist for different molecule classes and applications. In this subproject of the doctoral research, the influence of the choice of the partial charge set on potential energies and selected macroscopic properties from molecular dynamics simulations was systematically evaluated. It was shown that, especially for strongly polar molecules, the choice of a suitable partial charge set has a decisive influence on the simulation results and must therefore not be made naively, but only in a carefully targeted manner.
This article reports on whether the believability of avatars is a candidate modulation criterion for virtual exposure therapy of agoraphobia. To this end, several believability levels for avatars that could hypothetically influence virtual exposure therapy of agoraphobia are developed, as well as a potential exposure scenario. Within a user study, the work demonstrates a significant influence of the believability levels on presence, co-presence, and realism.
This thesis deals with the efficiency of side-channel cryptanalysis. In Part II of this thesis, we demonstrate how the most important analysis tools can be accelerated considerably using the CUDA platform. Second, we investigate new approaches to profiled side-channel cryptanalysis. The field of machine learning can be adapted for substantial improvements, but has so far received little attention in this respect. In Part III of this thesis, we present two new methods that exhibit some commonalities but also some differences, so that evaluation results can be presented in a more complete picture. Furthermore, in Part IV we propose a side-channel application for the protection of intellectual property (IP). In Part V, we engage more deeply in practical side-channel cryptanalysis by carrying out attacks on a security microcontroller that is used in an EC card widely deployed in Germany.
As robots become ubiquitous and more capable, there is a pressing need for solid robot software development methods in order to broaden robots' task spectrum. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, covering research on the specification, deployment, and adaptation of robot perception systems.
Human butyrylcholinesterase (BChE) is a glycoprotein capable of bioscavenging toxic compounds such as organophosphorus (OP) nerve agents. For commercial production of BChE, it is practical to synthesize BChE in non-human expression systems, such as plants or animals. However, the glycosylation profile in these systems is significantly different from the human glycosylation profile, which could result in changes in BChE's structure and function. From our investigation, we found that the glycan attached to ASN241 is both structurally and functionally important due to its close proximity to the BChE tetramerization domain and the active site gorge. To investigate the effects of populating glycosylation site ASN241, monomeric human BChE glycoforms were simulated with and without site ASN241 glycosylated. Our simulations indicate that the structure and function of human BChE are significantly affected by the absence of glycan 241.
Current robot platforms are being employed to collaborate with humans in a wide range of domestic and industrial tasks. These environments require autonomous systems that are able to classify and communicate anomalous situations such as fires, injured persons, and car accidents, or, more generally, any situation potentially dangerous for humans. In this paper we introduce an anomaly detection dataset for robot applications, as well as the design and implementation of a deep learning architecture that classifies and describes dangerous situations using only a single image as input. We report a classification accuracy of 97% and a METEOR score of 16.2. We will make the dataset publicly available after this paper is accepted.
This work presents the analysis of data recorded by an eye-tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze, exploiting the human visual system's limitations to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and which also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks consisted of combinations of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free-focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants' accuracy in focusing the displayed fixation targets. Here, we also take a look at eccentricity-dependent quality ratings. Comparing this information with the users' quality ratings given for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy, and quality ratings.
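One common way to estimate eye-tracker precision of the kind described above is the root mean square of the angular distances between successive gaze samples during a fixation. The following is a generic sketch of that metric, not the study's actual analysis code; it assumes gaze samples are given as unit 3D direction vectors.

```python
import math

def rms_sample_to_sample_precision(gaze_dirs):
    """RMS of angular deviations (in degrees) between successive samples.

    gaze_dirs: list of unit 3D gaze direction vectors recorded during a
    fixation. A perfectly stable gaze yields 0; jitter raises the value.
    """
    angles = []
    for (ax, ay, az), (bx, by, bz) in zip(gaze_dirs, gaze_dirs[1:]):
        # clamp the dot product to guard against rounding outside [-1, 1]
        dot = max(-1.0, min(1.0, ax * bx + ay * by + az * bz))
        angles.append(math.degrees(math.acos(dot)))
    return math.sqrt(sum(a * a for a in angles) / len(angles))
```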
The Sparse Matrix Vector Multiplication is an important operation on sparse matrices. It is the most time-consuming operation in iterative solvers, and an efficient execution of this operation is therefore of great importance for many applications. Numerous storage formats that store sparse matrices efficiently have already been established. Often, these storage formats exploit the sparsity pattern of a matrix in an appropriate manner. For one class of sparse matrices, the nonzero values occur in small dense blocks, and appropriate block storage formats are well suited for such patterns. On the other hand, these formats often perform poorly on general matrices without an explicit, regular block structure. In this paper, the newly developed sparse matrix format DynB is introduced. The aim is to efficiently use several optimization approaches and vectorization on current processors, even for matrices without an explicit block structure of nonzero elements. The DynB matrix format uses 2D rectangular blocks of variable size, allowing fill-ins of explicit zero values per block up to a user-controllable threshold. We give a simple and fast heuristic to detect such 2D blocks in a sparse matrix. The performance of the Sparse Matrix Vector Multiplication is compared for a selection of different block formats and matrices with different sparsity structures. Results show that the benefit of blocking formats depends, as is to be expected, on the structure of the matrix, and that variable-sized block formats like DynB can have advantages over fixed-size formats and deliver good performance even for general sparse matrices.
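The principle of a variable-block format can be sketched as follows (an illustration of the general idea, not the DynB implementation): the matrix is stored as a list of dense rectangular blocks, each with its top-left coordinates, and SpMV iterates block by block. Explicit zero fill-ins inside a block are simply multiplied along; the real format additionally packs blocks for vectorization, which is omitted here.

```python
def blocked_spmv(blocks, x, n_rows):
    """Compute y = A @ x for a matrix stored as variable-sized dense blocks.

    blocks: list of (row0, col0, values), where values is a 2D list
    holding a dense r x c block (it may contain explicit zero fill-ins).
    """
    y = [0.0] * n_rows
    for row0, col0, values in blocks:
        for i, block_row in enumerate(values):
            acc = 0.0
            for j, v in enumerate(block_row):
                acc += v * x[col0 + j]   # fill-in zeros contribute nothing
            y[row0 + i] += acc
    return y
```

The trade-off the paper studies is visible even in this sketch: larger blocks mean fewer index lookups and contiguous inner loops (good for vector units), but every tolerated zero fill-in adds a wasted multiply.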
In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification, and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure, we evaluate on standard benchmark sets. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset. We also integrate the recent real-time guided back-propagation visualization technique, which uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of current regularization methods, and the visualization of previously hidden features are necessary in order to reduce the gap between slow-performing models and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos, and pre-trained architectures have been released under an open-source license in our public repository.
Students' courses of study frequently deviate from the officially planned curriculum. For planning and further developing degree programs and curricula in a way that improves study success, those responsible often lack insight into actual study-path patterns, and into which patterns are typically successful or less successful. Process mining techniques can help create more transparency in the analysis of study paths and thus support the detection of typical study-path patterns, the verification of conformance between concrete study paths and the prescribed curriculum, and the targeted improvement of the curriculum.
Real-World Performance of current Mesh Protocols in a small-scale Dual-Radio Multi-Link Environment
(2017)
Two key questions motivated the work in this paper: What is the impact of different usage schemes for multiple channels in a dual-radio Wireless Mesh Network (WMN), and what is the impact of popular WMN routing protocols on its performance? These two questions were evaluated in a small and simple real-world scenario, with reproducibility of the results being a major concern. We show that it is beneficial to use both radios on different frequencies in a fully meshed environment with four routers. The routing protocols Babel, B.A.T.M.A.N. V, BMX7, and OLSRv2 recognize a saturated channel and prefer the other one. We show that in our scenario all of the protocols perform equally well, since the protocol overhead is comparably low and does not influence the overall performance of the network.
We present a new interface for interactive comparisons of more than two alternative documents in the context of a generative design system that uses generative data-flow networks defined via directed acyclic graphs. To better show differences between such networks, we emphasize added, deleted, and (un)changed nodes and edges. We highlight differences in the output as well as in parameters, and enable post-hoc merging of the state of a parameter across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability and keep visual clutter low. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that the number of similarities outweighs the differences, which motivates the use of subtractive encoding. We ran a user study to evaluate the two main proposed difference visualization encodings and found that they are equally effective.
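The added/deleted/kept classification described above reduces to set differences over the node and edge sets of two alternative networks. A minimal sketch with a hypothetical graph encoding (the paper's system operates on generative data-flow DAGs with typed nodes and parameters, which are omitted here):

```python
def diff_networks(g_a, g_b):
    """Classify nodes and edges of two networks as added, deleted, or kept.

    Each graph is a dict {node_id: set(successor_ids)} - a deliberately
    simple, hypothetical encoding used only for illustration.
    """
    nodes_a, nodes_b = set(g_a), set(g_b)
    edges_a = {(u, v) for u, succs in g_a.items() for v in succs}
    edges_b = {(u, v) for u, succs in g_b.items() for v in succs}
    return {
        "nodes_added": nodes_b - nodes_a,
        "nodes_deleted": nodes_a - nodes_b,
        "nodes_kept": nodes_a & nodes_b,
        "edges_added": edges_b - edges_a,
        "edges_deleted": edges_a - edges_b,
    }
```

A subtractive encoding as motivated in the abstract would then render only the "added" and "deleted" sets on top of a shared base network, since the "kept" set dominates in practice.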
Simulating eye movements for virtual humans or avatars can improve social experiences in virtual reality (VR) games, especially when wearing head-mounted displays. While other researchers have already demonstrated the importance of simulating meaningful eye movements, we compare three gaze models with different levels of fidelity regarding realism: (1) a base model with static fixation and saccadic movements, (2) a proposed simulation model that extends the saccadic model with gaze shifts based on a neural network, and (3) a user's real eye movements recorded by a proprietary eye tracker. Our between-groups design study with 42 subjects evaluates the impact of eye movements on social VR user experience in terms of perceived quality of communication and presence. The tasks include free conversation and two guessing games in a co-located setting. Results indicate that a high quality of communication in co-located VR can be achieved without using extended gaze behavior models beyond saccadic simulation. Users might have to gain more experience with VR technology before being able to notice subtle details in gaze animation. In the future, remote VR collaboration involving different tasks requires further investigation.
Populating virtual worlds with intelligent agents can drastically improve a user's sense of presence. Applying these worlds to virtual training, simulations, or (serious) games often requires multiple agents to be simulated in real time. The process of generating believable agent behavior starts with providing a plausible perception and attention process that is both efficient and controllable. We describe a conceptual framework for synthetic perception that specifically considers the mentioned requirements: plausibility, real-time performance, and controllability. A sample implementation focuses on sensing, attention, and memory to demonstrate the framework's capabilities in a real-time game engine scenario. A combination of dynamic geometric sensing and false coloring with static saliency information is provided to exemplify the collection of environmental stimuli. The subsequent attention process handles both bottom-up processing and task-oriented, top-down factors. Behavioral results can be influenced by controlling memory and attention. The example case is demonstrated and discussed alongside future extensions.
Integration of Multi-modal Cues in Synthetic Attention Processes to Drive Virtual Agent Behavior
(2017)
Service robots performing complex tasks involving people in homes or public environments are becoming more and more common, and there is huge interest from both research and industry. The RoCKIn@Home challenge has been designed to compare and evaluate different approaches and solutions to tasks related to the development of domestic and service robots. RoCKIn@Home competitions were designed and executed according to the benchmarking methodology developed during the project and received very positive feedback from the participating teams. Tasks and functionality benchmarks are explained in detail.
RoCKIn@Work focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a bigger user story painting the picture of a scaled-down real-world factory scenario. Elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, i.e. machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure the gathered data could be evaluated automatically. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility during research in their own laboratory. Tasks and functionality benchmarks are explained in detail, starting with their use case in industry, further detailing their execution, and providing information on scoring and ranking mechanisms for the specific benchmark.
A deployment of Vehicle-2-Vehicle communication technology according to ETSI is in preparation in Europe. Currently, a policy for the necessary Public Key Infrastructure to enrol cryptographic keys and certificates for vehicles and infrastructure components is under discussion to enable interoperable Vehicle-2-Vehicle communication. Vehicle-2-Vehicle communication means that vehicles periodically send Cooperative Awareness Messages. These messages contain the current geographic position, driving direction, speed, acceleration, and the current time of a vehicle. To protect the privacy (location privacy, "speed privacy") of vehicles and drivers, ETSI provides a specific pseudonym concept. We show that Vehicle-2-Vehicle communication can be misused by an attacker to plot a trace of consecutive Cooperative Awareness Messages and to link this trace to a specific vehicle. Such a trace is non-disputable due to the cryptographic signing of the messages. Thus, the periodic sending of Cooperative Awareness Messages causes privacy problems even if the pseudonym concept is applied.
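The linking attack outlined above exploits the physical continuity of consecutive CAMs: even when the pseudonym changes, the message whose position best matches the dead-reckoned prediction from the previous message can be chained into a trace. A simplified one-dimensional sketch of the idea, with hypothetical field names (real CAMs carry 2D positions, headings, and more):

```python
def link_trace(cams, start, max_gap=1.0):
    """Chain CAMs into one vehicle trace by dead reckoning.

    cams: list of dicts with keys 'pseudonym', 't', 'pos', 'speed'
    (1-D positions for simplicity). Starting from 'start', each step
    predicts the next position from the last message and picks the CAM
    of the next time step closest to that prediction - regardless of
    pseudonym, illustrating why pseudonym changes alone do not prevent
    linking.
    """
    trace = [start]
    remaining = [c for c in cams if c is not start]
    while True:
        last = trace[-1]
        later = [c for c in remaining if c["t"] > last["t"]]
        if not later:
            break
        next_t = min(c["t"] for c in later)
        step = [c for c in later if c["t"] == next_t]
        def error(c):
            predicted = last["pos"] + last["speed"] * (c["t"] - last["t"])
            return abs(c["pos"] - predicted)
        best = min(step, key=error)
        if error(best) > max_gap:
            break  # no plausible continuation of this vehicle's motion
        trace.append(best)
        remaining.remove(best)
    return [c["pseudonym"] for c in trace]
```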
In its general formulation, the cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P is not equal to NP (P != NP), no efficient, i.e. polynomial-time, algorithms exist for solving the general problem.
In this paper, efficient solutions are given for a number of instances.
In its general formulation, the cutting sticks problem is an NP-complete problem with potential applications in logistics. Under the assumption that P is not equal to NP (P != NP), no efficient, i.e. polynomial-time, algorithms exist for solving the general problem.
In this paper, approaches are presented by which certain instances of the problem can be computed efficiently. Parameters important for the computation are characterized and their interrelations are analyzed.
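For small instances, feasibility of the cutting sticks problem can be checked by brute force: the pieces of lengths 1..n must be distributed among the sticks so that each stick's pieces sum exactly to its length. The following backtracking sketch runs in exponential time in general, consistent with the NP-completeness noted above, and is meant purely as an illustration of the problem statement.

```python
def cuttable(sticks, n):
    """Can sticks (integer lengths summing to n*(n+1)/2) be cut into
    pieces of lengths exactly 1, 2, ..., n, each occurring once?

    Brute-force backtracking: place pieces from n down to 1 into any
    stick with enough remaining length, undoing choices on failure.
    """
    assert sum(sticks) == n * (n + 1) // 2
    remaining = list(sticks)

    def place(piece):
        if piece == 0:
            return all(r == 0 for r in remaining)  # every stick used up
        for i, r in enumerate(remaining):
            if r >= piece:
                remaining[i] -= piece
                if place(piece - 1):
                    return True
                remaining[i] += piece  # backtrack
        return False

    return place(n)
```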
The combination of Software-Defined Networking (SDN) and Wireless Mesh Networks (WMNs) is challenging due to the different natures of the two concepts. SDN describes networks with homogeneous, static, and centrally controlled topologies. In contrast, a WMN is characterized by dynamic and distributed network control, and adds new challenges with respect to time-critical operation. However, SDN and WMN are both associated with decreasing the operational costs of communication networks, which is especially beneficial for internet provisioning in rural areas. This work surveys the current status of Software-Defined Wireless Mesh Networking. Besides a general overview of wireless SDN, this work focuses on several identified aspects: representing and controlling wireless interfaces, control-plane connection and topology discovery, modulation and coding, routing and load balancing, and client handling. A complete overview of surveyed solutions, open issues, and new research directions is provided for each aspect.
Infection Exposure Promotes ETV6-RUNX1 Precursor B-cell Leukemia via Impaired H3K4 Demethylases
(2017)
ETV6-RUNX1 is associated with the most common subtype of childhood leukemia. As few ETV6-RUNX1 carriers develop precursor B-cell acute lymphocytic leukemia (pB-ALL), the underlying genetic basis for the development of full-blown leukemia remains to be identified, but the appearance of leukemia cases in time-space clusters keeps infection in play as a potential causal factor. Here we present in vivo genetic evidence mechanistically connecting preleukemic ETV6-RUNX1 expression in hematopoietic stem cells/peripheral cells (HSC/PC) and postnatal infections for human-like pB-ALL. In our model, ETV6-RUNX1 conferred a low risk of developing pB-ALL after exposure to common pathogens, corroborating the low incidence observed in humans. Murine preleukemic ETV6-RUNX1 pro/preB cells showed high Rag1/2 expression, as known for human ETV6-RUNX1 pB-ALL. Murine and human ETV6-RUNX1 pB-ALL revealed recurrent genomic alterations, with a relevant proportion affecting genes of the lysine demethylase (KDM) family. KDM5C loss of function resulted in increased levels of H3K4me3, which co-precipitated with RAG2 in a human cell line model, laying the molecular basis for recombination activity. We conclude that alterations of KDM family members represent a disease-driving mechanism and an explanation for the RAG off-target cleavage observed in humans. Our results explain the genetic basis for the clonal evolution of an ETV6-RUNX1 preleukemic clone to pB-ALL after infection exposure and offer the possibility of novel therapeutic approaches.
OpenDaylight (ODL) is a collaborative, open-source platform to accelerate the adoption of, and innovation in, Software-Defined Networking (SDN) and Network Functions Virtualization (NFV). This paper describes the ODL architecture in a simplified way to provide a better understanding of it. The ODL architecture intends to foster innovation and accelerate the adoption of programming the network. The Model-Driven Service Abstraction Layer (MD-SAL) in the architecture makes it possible to develop models for the automatic management and configuration of networks. MD-SAL provides ODL with the ability to support any protocol talking to the network elements as well as any network application. The flexibility inherent in the ODL architecture could enable ODL to shape next-generation networks.
Emotional communication is a key element of habilitation care of persons with dementia. It is therefore highly preferable for assistive robots that are used to supplement the human care provided to persons with dementia to possess the ability to recognize and respond to emotions expressed by those who are being cared for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly.
Although there has been much work on automatic facial expression recognition, the algorithms have been validated experimentally primarily on young faces. Facial expressions on older faces have been totally excluded. This is due to the fact that the facial expression databases that have been available and used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65. To overcome this problem, we adopt a recently published database, the FACES database, which was developed to address exactly the same problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with almost identical and systematic representation of the young, middle-aged, and older age groups.
In this work, we evaluate and compare the performance of two of the existing image-based approaches for facial expression recognition over a broad spectrum of age ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification. We have experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotions. We also identified that, among the three age-groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to the performance of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing the expressions on older faces, and difficulty in recognizing the expression of sadness.
The findings of our work establish the need for developing approaches for facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
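The uniform LBP features mentioned above lend themselves to a compact illustration. The following is a minimal sketch assuming a square 3x3 neighbourhood; the function names `lbp_code` and `is_uniform` are invented for illustration, and the evaluated systems sample neighbours on a circle and bin the codes into histograms rather than using raw codes:

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour local binary pattern code for pixel (r, c).
    img is a 2D list of grey values; each neighbour at least as bright
    as the centre contributes one bit, clockwise from the top-left."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string contains at
    most two 0/1 transitions; only these patterns receive individual
    histogram bins, while all remaining patterns share a single bin."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2
```

A flat patch yields the all-ones code 255, which is uniform (zero transitions), while an alternating pattern such as 0b01010101 has eight transitions and is non-uniform.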
Population ageing and the growing prevalence of disability have resulted in a growing need for personal care and assistance. The insufficient supply of personal care workers and the rising costs of long-term care have turned this phenomenon into a greater social concern. This has resulted in a growing interest in assistive technology in general, and assistive robots in particular, as a means of substituting or supplementing the care provided by humans, and as a means of increasing the independence and overall quality of life of persons with special needs. Although many assistive robots have been developed in research labs worldwide, very few are commercially available. One of the reasons for this is cost. One way of optimising cost is to develop solutions that address specific needs of users. As a precursor to this, it is important to identify gaps between what the users need and what the technology (assistive robots) currently provides. This information is obtained through technology mapping.
The current literature lacks a mapping between user needs and assistive robots, at the level of individual systems. The user needs are not expressed in uniform terminology across studies, which makes comparison of results difficult. In this research work, we have illustrated the technology mapping of assistive robots using the International Classification of Functioning, Disability and Health (ICF). ICF provides standard terminology for expressing user needs in detail. Expressing the assistive functions of robots also in ICF terminology facilitates communication between different stakeholders (rehabilitation professionals, robotics researchers, etc.).
We also investigated existing taxonomies for assistive robots. It was observed that there is no widely accepted taxonomy for classifying assistive robots. However, there exists an international standard, ISO 9999, which classifies commercially available assistive products. The applicability of the latest revision of ISO 9999 standard for classifying mobility assistance robots has been studied. A partial classification of assistive robots based on ISO 9999 is suggested. The taxonomy and technology mapping are illustrated with the help of four robots that have the potential to provide mobility assistance. These are the SmartCane, the SmartWalker, MAid and Care-O-bot (R) 3. SmartCane, SmartWalker and MAid provide assistance by supporting physical movement. Care-O-bot (R) 3 provides assistance by reducing the need to move.
Recent work in image captioning and scene segmentation has shown significant results in the context of scene understanding. However, most of these developments have not been extrapolated to research areas such as robotics. In this work we review the current state-of-the-art models, datasets and metrics in image captioning and scene segmentation. We introduce an anomaly detection dataset for the purpose of robotic applications, and we present a deep learning architecture that describes and classifies anomalous situations. We report a METEOR score of 16.2 and a classification accuracy of 97%.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering a user a full visual experience.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique to 'illuminate' the problem space through the lens of chosen features has the potential to be a powerful tool for exploring design spaces, but is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination (SAIL) algorithm, introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high-performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
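The MAP-Elites loop that SAIL accelerates can be sketched minimally as follows. The one-dimensional search space, the Gaussian mutation operator and the parameter names are illustrative assumptions, not the paper's implementation (which works in higher-dimensional spaces and, in SAIL, replaces most objective evaluations with surrogate predictions):

```python
import random

def map_elites(evaluate, feature, n_bins=10, iterations=2000, seed=0):
    """Minimal one-dimensional MAP-Elites: keep the best solution per
    feature bin.  evaluate maps a solution x in [0, 1] to a fitness to
    be maximized; feature maps x to a feature value in [0, 1]."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution)
    for _ in range(iterations):
        if archive:
            # pick a random elite and perturb it
            _, parent = rng.choice(list(archive.values()))
            x = min(1.0, max(0.0, parent + rng.gauss(0.0, 0.1)))
        else:
            x = rng.random()
        b = min(n_bins - 1, int(feature(x) * n_bins))
        f = evaluate(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)  # new elite for this bin
    return archive
```

Running it on a toy objective, e.g. `map_elites(lambda x: -(x - 0.5) ** 2, lambda x: x)`, gradually fills every bin with the best design found for that feature value, which is the 'illumination' of the problem space described above.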
A new method for design space exploration and optimization, Surrogate-Assisted Illumination (SAIL), is presented. Inspired by robotics techniques designed to produce diverse repertoires of behaviors for use in damage recovery, SAIL produces diverse designs that vary according to features specified by the designer. By producing high-performing designs with varied combinations of user-defined features a map of the design space is created. This map illuminates the relationship between the chosen features and performance, and can aid designers in identifying promising design concepts. SAIL is designed for use with computationally expensive design problems, such as fluid or structural dynamics, and integrates approximative models and intelligent sampling of the objective function to minimize the number of function evaluations required. On a 2D airfoil optimization problem SAIL is shown to produce hundreds of diverse designs which perform competitively with those found by state-of-the-art black box optimization. Its capabilities are further illustrated in a more expensive 3D aerodynamic optimization task.
WiFi-based Long Distance (WiLD) networks have emerged as a promising alternative approach for Internet in rural areas. The main hardware components of these networks are commercial off-the-shelf WiFi radios and directional antennas. During our experiences with real-world WiLD networks, we found that interference among long-distance links is a major issue even with high-gain directional antennas. In this work, we provide an in-depth analysis of these interference effects by conducting simulations in ns-3. To closely match the real-world interference effects, we implemented a module to load the radiation patterns of commonly used antennas. We analyze two different interference scenarios typically present as part of larger networks. The results show that the side-lobes of directional antennas significantly influence the throughput of long-distance WiFi links depending on their orientation. This work emphasizes that the usage of simple directional antenna models needs to be considered carefully.
Evolutionary illumination is a recent technique that produces many diverse, optimal solutions in a map of manually defined features. To cope with the large number of objective function evaluations, surrogate model assistance was recently introduced. Illumination models need to represent many more diverse optimal regions than classical surrogate models. In this PhD thesis, we propose to decompose the sample set, decreasing model complexity, by hierarchically segmenting the training set according to the samples' coordinates in feature space. An ensemble of diverse models can then be trained to serve as a surrogate to illumination.
Neuroevolution methods evolve the weights of a neural network, and in some cases the topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, an important factor when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network. The size and performance of networks produced using NEAT with a uniform activation function in all nodes (homogeneous networks) are compared to networks which contain a mixture of activation functions (heterogeneous networks). For a number of regression and classification benchmarks it is shown that: (1) qualitatively different activation functions lead to different results in homogeneous networks, (2) the heterogeneous version of NEAT is able to select well-performing activation functions, and (3) the produced heterogeneous networks are significantly smaller than homogeneous networks.
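The per-node activation mutation described above can be sketched compactly. The activation pool, the genome encoding as a node-to-name mapping, and the mutation rate below are illustrative assumptions rather than the extended NEAT implementation itself:

```python
import math
import random

# candidate activation functions a node may adopt; this particular pool
# is an illustrative assumption, not the paper's configuration
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    "sin": math.sin,
}

def mutate_activation(genome, rate=0.1, rng=random):
    """Heterogeneous-NEAT style mutation: with probability `rate` per
    node, reassign that node's activation function, chosen uniformly
    from the pool.  `genome` maps node ids to activation names."""
    for node in genome:
        if rng.random() < rate:
            genome[node] = rng.choice(list(ACTIVATIONS))
    return genome
```

A homogeneous network corresponds to a genome in which every node carries the same name; repeated application of this mutation inside the evolutionary loop yields the heterogeneous networks the study compares against.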
While executing actions, service robots may experience external faults because of insufficient knowledge about the actions' preconditions. The possibility of encountering such faults can be minimised if symbolic and geometric precondition models are combined into a representation that specifies how and where actions should be executed. This work investigates the problem of learning such action execution models and the manner in which those models can be generalised. In particular, we develop a template-based representation of execution models, which we call delta models, and describe how symbolic template representations and geometric success probability distributions can be combined for generalising the templates beyond the problem instances on which they are created. Our experimental analysis, which is performed with two physical robot platforms, shows that delta models can describe execution-specific knowledge reliably, thus serving as a viable model for avoiding the occurrence of external faults.
From video games to mobile augmented reality, 3D interaction is everywhere. But simply choosing to use 3D input or 3D displays isn't enough: 3D user interfaces (3D UIs) must be carefully designed for optimal user experience. 3D User Interfaces: Theory and Practice, Second Edition is today's most comprehensive primary reference to building outstanding 3D UIs. Four pioneers in 3D user interface research and practice have extensively expanded and updated this book, making it today's definitive source for all things related to state-of-the-art 3D interaction.
This paper describes the security mechanisms of several wireless building automation technologies, namely ZigBee, EnOcean, Z-Wave, KNX, FS20, and HomeMatic. It is shown that none of the technologies provides the necessary measure of security that should be expected in building automation systems. One of the conclusions drawn is that software embedded in systems that are built for a lifetime of twenty years or more needs to be updatable.
The knowledge of Software Features (SFs) is vital for software developers and requirements specialists during all software engineering phases: to understand and derive software requirements, to plan and prioritize implementation tasks, to update documentation, or to test whether the final product correctly implements the requested SF. In most software projects, SFs are managed in conjunction with other information such as bug reports, programming tasks, or refactoring tasks with the aid of Issue Tracking Systems (ITSs). Hence ITSs contain a variety of information that is only partly related to SFs. In practice, however, the usage of ITSs to store SFs comes with two major problems: (1) ITSs are neither designed nor used as documentation systems. Therefore, the data inside an ITS is often uncategorized and SF descriptions are concealed in rather lengthy issue texts. (2) Although an SF is often requested in a single sentence, related information can be scattered among many issues; for example, implementation tasks related to an SF are often reported in additional issues. Hence, the detection of SFs in ITSs is complicated: a manual search for SFs implies reading, understanding and exploiting the Natural Language (NL) in many issues in detail. This is cumbersome and labor-intensive, especially if related information is spread over more than one issue. This thesis investigates whether SF detection can be supported automatically. First the problem is analyzed: (i) An empirical study shows that requests for important SFs reside in ITSs, making ITSs a good target for SF detection. (ii) A second study identifies characteristics of the information and related NL in issues. These characteristics represent opportunities as well as challenges for the automatic detection of SFs. Based on these problem studies, the Issue Tracking Software Feature Detection Method (ITSoFD) is proposed. The method has two main components and includes an approach to preprocess issues.
Both components address one of the problems associated with storing SFs in ITSs. ITSoFD is validated in three solution studies: (I) An empirical study researches how NL that describes SFs can be detected with techniques from Natural Language Processing (NLP) and Machine Learning. Issues are parsed and different characteristics of the issue and its NL are extracted. These characteristics are used to classify the issue's content and identify SF description candidates, thereby approaching problem (1). (II) An empirical study researches how issues that carry information potentially related to an SF can be detected with techniques from NLP and Information Retrieval. Characteristics of the issue's NL are utilized to create a traceability network of related issues, thereby approaching problem (2). (III) An empirical study researches how NL data in issues can be preprocessed using heuristics and hierarchical clustering. Code, stack traces, and other technical information are separated from NL. Heuristics are used to identify candidates for technical information, and clustering improves the heuristics' results. The technique can be applied to support components (I) and (II).
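The separation of code and stack traces from NL in study (III) can be illustrated with a toy line classifier. The prefixes and the punctuation-density threshold below are invented for illustration and are far simpler than the thesis's actual combination of heuristics and hierarchical clustering:

```python
def looks_like_code(line):
    """Toy heuristic: flag a line as technical content (source code or
    stack trace) rather than natural language."""
    stripped = line.strip()
    if not stripped:
        return False
    # typical stack-trace / shell / REPL prefixes
    if stripped.startswith(("at ", "Caused by:", ">>>", "$ ")):
        return True
    # a high density of code-like punctuation suggests source code
    symbols = sum(stripped.count(c) for c in "{}();=<>[]")
    return symbols / len(stripped) > 0.15
```

Applied line by line to an issue body, such heuristics produce the candidates for technical information that the clustering step then refines.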
p53 is a crucial regulator of the cell response to DNA damage. MDM4 and MDM2 are the two main negative regulators of p53 activity. Upon DNA damage, their constraint is released and p53 becomes activated and exerts its safeguard function by arresting cell growth or by killing excessively damaged cells. Under these conditions, increasing data suggest that MDM4 and MDM2 play novel roles. In this respect, we recently published that MDM4 exerts a positive activity towards p53-mediated mitochondrial apoptosis. We observed that a fraction of MDM4 stably localizes at the mitochondria where, upon lethal stress conditions, it promotes the mitochondrial localization of p53 phosphorylated at Ser46 (p53Ser46(P)) and facilitates its binding to BCL2, cytochrome C release and apoptosis. Most importantly, we observed a correlation of MDM4 expression with cisplatin resistance in a group of human ovarian cancers, suggesting that the proapoptotic activity of MDM4 may have in vivo relevance. Here, we discuss these and some new findings and compare them with previous data, trying to settle some apparent contradictions. In addition, this review discusses the potential relevance of our data to the field of human cancer.
Exploring Gridmap-based Interfaces for the Remote Control of UAVs under Bandwidth Limitations
(2017)
This book constitutes the thoroughly refereed post-conference proceedings of the 15th International Conference on Smart Card Research and Advanced Applications, CARDIS 2016, held in Cannes, France, in November 2016. The 15 revised full papers presented in this book were carefully reviewed and selected from 29 submissions. The focus of the conference was on all aspects of the design, development, deployment, validation, and application of smart cards or smart personal devices.
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user. This technique has the potential to be a powerful tool for design space exploration, but is limited by the need for numerous evaluations. The Surrogate-Assisted Illumination algorithm (SAIL), introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
The ability of SAIL to efficiently produce both accurate models and diverse high-performing solutions is illustrated on a 2D airfoil design problem. The search space is divided into bins, each holding a design with a different combination of features. In each bin SAIL produces a better performing solution than MAP-Elites, and requires several orders of magnitude fewer evaluations. The CMA-ES algorithm was used to produce an optimal design in each bin: with the same number of evaluations required by CMA-ES to find a near-optimal solution in a single bin, SAIL finds solutions of similar quality in every bin.
RPSL meets lightning: A model-based approach to design space exploration of robot perception systems
(2017)
Tracelets and Specifications
(2017)
In the accompanying paper [1] the authors study a model of concurrent programs in terms of events and a dependence relation, i.e., a set of arrows between them. Two simplifying interface models are also presented there; they abstract in different ways from the intricate network of internal points and arrows of program components. This report supplements [1] by presenting full proofs for the properties of the interface models, in particular that both models exhibit homomorphic behaviour w.r.t. sequential and concurrent composition. [1] B. Möller, C.A.R. Hoare, M.E. Müller, G. Struth: A discrete geometric model of concurrent program execution. In H. Zhu, J. Bowen: Proc. UTP 16. LNCS 10134. Springer 2017, 1-25
A plethora of architectural patterns and elements for developing service-oriented applications can be gathered from the state of the art. Most of these approaches are applicable merely to single-tenant applications. However, less methodical support is provided for scenarios in which multiple different tenants with varying requirements access the same application stack concurrently. In order to fill this gap, both novel and existing architectural patterns, architectural elements, and fundamental design decisions must be considered and integrated into a framework that supports the development of multi-tenant applications. This paper addresses this demand and presents the SOAdapt framework. It promotes the development of adaptable multi-tenant applications based on a service-oriented architecture that is capable of incorporating specific requirements of new tenants in a flexible manner.
Systemunterstützung für wissensintensive Geschäftsprozesse – Konzepte und Implementierungsansätze
(2017)
This book presents theory and latest application work in Bond Graph methodology with a focus on:
hybrid dynamical system models, model-based fault diagnosis, model-based fault-tolerant control, and fault prognosis. It also addresses open thermodynamic systems with compressible fluid flow and distributed parameter models of mechanical subsystems.
In addition, the book covers various applications of current interest, ranging from motorised wheelchairs, in-vivo surgery robots and walking machines to wind turbines. The up-to-date presentation has been made possible by experts who are active members of the worldwide bond graph modelling community.
This book is the completely revised 2nd edition of the 2011 Springer compilation text titled Bond Graph Modelling of Engineering Systems – Theory, Applications and Software Support. It extends the presentation of the theory and applications of bond graph methodology with new developments and the latest research results.
Like the first edition, this book addresses readers in academia as well as practitioners in industry and invites experts in related fields to consider the potential and the state-of-the-art of bond graph modelling.