H-BRS Bibliography
The European General Data Protection Regulation requires the implementation of Technical and Organizational Measures (TOMs) to reduce the risk of illegitimate processing of personal data. For these measures to be effective, they must be applied correctly by employees who process personal data under the authority of their organization. However, even data processing employees often have limited knowledge of data protection policies and regulations, which increases the likelihood of misconduct and privacy breaches. To lower the likelihood of unintentional privacy breaches, TOMs must be developed with employees' needs, capabilities, and usability requirements in mind. Privacy patterns have proven effective at reducing implementation costs and helping organizations and IT engineers implement such measures. In this chapter, we introduce the privacy pattern Data Cart, which specifically supports the development of TOMs for data processing employees. Based on a user-centered design approach with employees from two public organizations in Germany, we present a concept that illustrates how Privacy by Design can be implemented effectively. Organizations, IT engineers, and researchers will gain insight into how to improve the usability of privacy-compliant tools for managing personal data.
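As a purely illustrative sketch of the idea behind such a pattern (the class names, fields, and time-limited checkout below are our assumptions, not the chapter's specification of Data Cart), a data-cart-style workflow might bind access to personal data to an approved purpose and an expiry time:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataCartRequest:
    employee: str
    purpose: str                     # documented processing purpose
    fields: list[str]                # only the personal-data fields needed
    approved: bool = False
    expires_at: datetime | None = None

class DataCart:
    """Checks personal data out for a limited, purpose-bound time (sketch)."""
    def __init__(self, ttl_minutes: int = 60):
        self.ttl = timedelta(minutes=ttl_minutes)
        self.checked_out: dict[str, DataCartRequest] = {}

    def approve(self, request: DataCartRequest) -> None:
        # In practice a data-protection officer or policy engine decides.
        request.approved = True
        request.expires_at = datetime.now() + self.ttl
        self.checked_out[request.employee] = request

    def access(self, employee: str) -> list[str]:
        req = self.checked_out.get(employee)
        if req is None or not req.approved:
            raise PermissionError("no approved data cart for this employee")
        if datetime.now() > req.expires_at:
            del self.checked_out[employee]   # expired carts are removed
            raise PermissionError("data cart expired; request again")
        return req.fields

cart = DataCart(ttl_minutes=30)
req = DataCartRequest("jdoe", "payroll correction", ["name", "iban"])
cart.approve(req)
print(cart.access("jdoe"))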
Users should always play a central role in the development of (software) solutions. The human-centered design (HCD) process in the ISO 9241-210 standard proposes a procedure for systematically involving users. However, due to its level of abstraction, the HCD process provides little guidance on how it should be implemented in practice. In this chapter, we propose three concrete, practical methods that enable the reader to develop usable security and privacy (USP) solutions using the HCD process. This chapter equips the reader with the procedural knowledge and recommendations to: (1) derive mental models with regard to security and privacy, (2) analyze USP needs and privacy-related requirements, and (3) collect user characteristics on privacy and structure them into user group profiles and privacy personas. Together, these approaches help to design user-friendly security and privacy measures based on a firm understanding of the key stakeholders.
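To make the notion of a privacy persona concrete, here is a minimal sketch of one possible data structure for grouping collected user characteristics; all attribute names and example personas are invented for illustration and are not taken from the chapter:

from dataclasses import dataclass

@dataclass
class PrivacyPersona:
    name: str
    privacy_concern: str      # e.g. "low" / "medium" / "high"
    technical_skill: str      # self-reported proficiency level
    mental_model: str         # short summary of how they reason about privacy
    typical_needs: list[str]  # USP needs elicited in interviews

personas = [
    PrivacyPersona("Pragmatic Clerk", "medium", "basic",
                   "privacy means following the checklist",
                   ["clear step-by-step guidance", "minimal extra clicks"]),
    PrivacyPersona("Cautious Caseworker", "high", "intermediate",
                   "privacy means protecting clients from harm",
                   ["visible audit trail", "explicit consent prompts"]),
]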
Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization, through insufficient interpretability and implausible predictions, to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from so-called safety concerns: properties that preclude their deployment, as no argument or experimental setup can help to assess the remaining risk. In recent years, an abundance of state-of-the-art techniques aiming to address these safety concerns has emerged. This chapter provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aiming at their detection, quantification, or mitigation. Our work addresses machine learning experts and safety engineers alike: the former may profit from the broad range of machine learning topics covered and the discussions of the limitations of recent methods; the latter may gain insights into the specifics of modern machine learning methods. We hope that this contribution fuels discussions on desiderata for machine learning systems and on strategies for advancing existing approaches accordingly.
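As one minimal, self-contained illustration of the detection theme, a naive runtime monitor can refuse to act on low-confidence predictions instead of passing them downstream; the threshold value and model interface below are assumptions for illustration, and real safety mechanisms are far more elaborate:

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()           # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def monitored_predict(logits: np.ndarray, threshold: float = 0.9):
    """Return (class, confidence), or None if the prediction is too uncertain."""
    probs = softmax(logits)
    k = int(probs.argmax())
    if probs[k] < threshold:
        return None                     # escalate to a fallback, e.g. a human
    return k, float(probs[k])

print(monitored_predict(np.array([4.0, 0.1, 0.1])))   # confident: accepted
print(monitored_predict(np.array([0.5, 0.4, 0.45])))  # uncertain: None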
Domestic Robotics
(2016)
Domestic Robotics
(2008)
Improving the Performance of Parallel SpMV Operations on NUMA Systems with Adaptive Load Balancing
(2018)
For a parallel Sparse Matrix Vector Multiply (SpMV) on a multiprocessor, rather simple and efficient work distributions often produce good results. Where this is not the case, adaptive load balancing can improve both balance and performance. This paper introduces a low-overhead framework for adaptive load balancing of parallel SpMV operations. It uses statistical filters to gather relevant runtime performance data and to detect imbalance. We compare three different algorithms that adaptively balance the load with high quality and low overhead. Results show that for sparse matrices where adaptive load balancing was enabled, our best algorithm achieved an average speedup of 1.15 in total execution time across four different matrix formats and two different NUMA systems.
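The paper's algorithms are not reproduced here, but the core idea of cost-based repartitioning can be sketched as follows; this is a minimal illustration in which contiguous row ranges are re-cut so that a measured or estimated per-row cost, rather than the row count, is split evenly across threads:

import numpy as np

def rebalance(row_cost: np.ndarray, nthreads: int) -> list[slice]:
    """Return one contiguous row range per thread with roughly equal total cost."""
    csum = np.cumsum(row_cost)
    total = csum[-1]
    bounds = [0]
    for t in range(1, nthreads):
        # first boundary where the prefix cost passes t/nthreads of the total
        bounds.append(int(np.searchsorted(csum, total * t / nthreads, side="right")))
    bounds.append(len(row_cost))
    return [slice(bounds[t], bounds[t + 1]) for t in range(nthreads)]

# cost proxy: nonzeros per row; at runtime, measured timings would replace it
per_row_cost = np.array([2, 2, 2, 2, 10, 10, 10, 12])
print(rebalance(per_row_cost, 2))
# -> [slice(0, 5), slice(5, 8)]: costs 18 vs 32, instead of 8 vs 42
#    for a naive equal-row split of four rows per thread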
Service robots performing complex tasks involving people in homes or public environments are becoming more and more common, and there is strong interest from both research and industry. The RoCKIn@Home challenge has been designed to compare and evaluate different approaches and solutions to tasks related to the development of domestic and service robots. The RoCKIn@Home competitions were designed and executed according to the benchmarking methodology developed during the project and received very positive feedback from the participating teams. The tasks and functionality benchmarks are explained in detail.
RoCKIn@Work focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a larger user story painting the picture of a scaled-down real-world factory scenario. The elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, i.e., machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure that the gathered data could be evaluated automatically. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility for research in their own laboratories. The tasks and functionality benchmarks are explained in detail, starting with their use case in industry, then detailing their execution, and finally providing information on the scoring and ranking mechanisms for each specific benchmark.
Systemunterstützung für wissensintensive Geschäftsprozesse – Konzepte und Implementierungsansätze
(2017)
Integrating Bond Graph-Based Fault Diagnosis and Fault Accommodation Through Inverse Simulation
(2017)
Incremental Bond Graphs
(2011)
In the past decade, computer models have become very popular in the field of biomechanics due to exponentially increasing computer power. Biomechanical computer models can roughly be subdivided into two groups: multi-body models and numerical models. The theoretical aspects of both modelling strategies are introduced. However, the focus of this chapter lies on demonstrating the power and versatility of computer models in the field of biomechanics by presenting sophisticated finite element models of human body parts. Special attention is paid to explaining the setup of individual models using medical scan data. To individualise a model, a chain of tools becomes necessary, including medical imaging, image acquisition and processing, mesh generation, material modelling, and finite element simulation (possibly on parallel computer architectures). The basic concepts of these tools are described and application results are presented. The chapter ends with a short outlook on the future of computer biomechanics.
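Schematically, this individualisation chain can be pictured as a pipeline like the following sketch; every step is a trivial stand-in written for illustration only, since real implementations rely on dedicated imaging and finite element packages:

import numpy as np

def load_medical_scan(shape=(32, 32, 32)):
    rng = np.random.default_rng(0)
    return rng.random(shape)                  # stand-in for CT/MRI voxel data

def segment_tissue(volume, threshold=0.5):
    return volume > threshold                 # image processing: thresholding

def generate_mesh(mask):
    return np.argwhere(mask)                  # voxel centres as toy "elements"

def assign_materials(elements):
    # 17 GPa is a rough literature value for cortical bone stiffness
    return {"elements": elements, "youngs_modulus_pa": 17e9}

def run_fe_simulation(model):
    return f"FE model with {len(model['elements'])} elements ready to solve"

print(run_fe_simulation(assign_materials(generate_mesh(
    segment_tissue(load_medical_scan())))))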
The development of mobile robotic systems is a demanding task in terms of complexity, required resources, and skills in multiple fields such as software development, artificial intelligence, mechanical design, electrical engineering, signal processing, sensor technology, and control theory. This holds true particularly for soccer-playing robots, where additional aspects such as high dynamics, cooperation, and high physical stress have to be dealt with. In robot competitions such as RoboCup, additional skills in the domains of team, project, and knowledge management are also important.
Virtuelle Umgebungen
(2000)
Information reliability and automatic computation are two important aspects that are continuously pushing the Web to become more semantic. Information uploaded to the Web should be reusable and automatically extractable by other applications, platforms, etc. Several tools exist to explicitly mark up Web content. Web services may also play a positive role in the automatic processing of Web content, especially when they act as flexible and agile agents. However, Web services themselves should be developed with semantics in mind: they should include and provide structured information to facilitate their use, reuse, composition, querying, etc. In this chapter, the authors focus on evaluating state-of-the-art semantic aspects and approaches in Web services. Ultimately, this contributes to the goal of Web knowledge management, execution, and transfer.
In Artificial Intelligence, numerous learning paradigms have been developed over the past decades. For most embodied and situated agents, the learning goal is to "map" or classify the environment and the objects therein [1, 2] in order to improve navigation or the execution of some other domain-specific task. Dynamic environments and changing tasks still pose a major challenge for robotic learning in real-world domains. To intelligently adapt its task strategies, the agent needs cognitive abilities to understand its environment and the effects of its actions more deeply. To approach this challenge within an open-ended learning loop, the XPERO project (http://www.xpero.org) explores the paradigm of Learning by Experimentation to autonomously increase the robot's conceptual world knowledge. In this setting, tasks selected by an action-selection mechanism are interrupted by a learning loop whenever the robot identifies learning as necessary for solving a task or for explaining observations. It is important to note that our approach targets unsupervised learning, since no oracle is available to the agent, nor does it have access to a reward function providing direct feedback on the quality of its learned model, as, e.g., in reinforcement learning approaches. In the following sections we present our framework for integrating autonomous robotic experimentation into such a learning loop. In Section 1 we explain the different modules for the stimulation and design of experiments and their interaction. In Section 2 we describe our implementation of these modules and how we applied them to a real-world scenario to gather target-oriented data for learning conceptual knowledge. There we also indicate how goal-oriented data generation enables machine learning algorithms to revise the failed prediction model.
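The structure of such a loop can be conveyed by a deliberately tiny toy example; the agent, its one-parameter model, and the simulated world below are our stand-ins, not the XPERO implementation. A surprising observation interrupts task execution, triggers targeted experiments, and revises the failed prediction model:

import random

class ToyAgent:
    def __init__(self):
        self.model_threshold = 1.0   # believed force needed to move a box
        self.world_threshold = 3.0   # hidden ground truth (the "world")

    def predict(self, force):        # model: does this force move the box?
        return force >= self.model_threshold

    def observe(self, force):        # world: what actually happens
        return force >= self.world_threshold

    def experiment_and_revise(self):
        # design targeted experiments around the current belief
        for force in (self.model_threshold, self.model_threshold + 0.5):
            if self.predict(force) != self.observe(force):
                self.model_threshold += 0.5   # revise the failed model

agent = ToyAgent()
for _ in range(20):                  # task execution, interrupted on surprise
    force = random.uniform(0.0, 4.0)
    if agent.predict(force) != agent.observe(force):
        agent.experiment_and_revise()
print(agent.model_threshold)         # converges toward the hidden threshold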
With regard to performance, well-established SW-only design methodologies proceed by first making the initial specification run, then enhancing its functionality, and finally optimizing it. When designing Embedded Systems (EbS), this approach is not viable, since decisive design decisions, e.g., estimating the required processing power or identifying those parts of the specification that need to be delegated to dedicated HW, depend on the speed and fairness of the initial specification. We propose a sequence of optimization steps embedded into the design flow, which enables a structured way to accelerate a given working EbS specification at different layers. This sequence of accelerations comprises algorithm selection, algorithm transformation, data transformation, implementation optimization, and finally HW acceleration. We analyze how all acceleration steps are influenced by the specific attributes of the underlying EbS. The overall acceleration procedure is explained and quantified using a real-life industrial example.
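As a toy illustration of the first two layers, algorithm selection/transformation and data transformation (the duplicate-detection task is an invented stand-in for the industrial example, not taken from the chapter):

def has_duplicates_naive(xs):            # initial working specification
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))   # O(n^2) comparisons

def has_duplicates_fast(xs):             # algorithm transformation plus a
    return len(set(xs)) < len(xs)        # hash-set data layout: O(n)

data = list(range(10_000)) + [42]
assert has_duplicates_naive(data[:100] + [42])   # small input: naive is fine
assert has_duplicates_fast(data)                 # large input: transformed version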