The development of robot control programs is a complex task. Many robots differ in their electrical and mechanical structure, which is also reflected in the software. Specific robot software environments support program development, but they are mainly text-based and usually applied by experts in the field with profound knowledge of the target robot. This paper presents a graphical programming environment that aims to ease the development of robot control programs. In contrast to existing graphical robot programming environments, our approach focuses on the composition of parallel action sequences. The developed environment allows independent robot actions to be scheduled on parallel execution lines and provides mechanisms to avoid side effects of parallel actions. It is platform-independent and based on the model-driven paradigm. The feasibility of our approach is shown by applying the sequencer to a simulated service robot and a robot for educational purposes.
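To make the idea concrete, here is a minimal Python sketch of such a parallel sequencer: independent actions run concurrently, while actions that share a hardware resource are serialized to avoid side effects. The `Action` and `ParallelSequencer` names and the lock-based mechanism are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of a parallel action sequencer; names are illustrative.
import threading

class Action:
    def __init__(self, name, resources, run):
        self.name = name
        self.resources = set(resources)  # hardware the action touches
        self.run = run

class ParallelSequencer:
    """Runs independent actions concurrently; actions sharing a resource
    (e.g. the same arm) are serialized to avoid side effects."""
    def __init__(self):
        self._locks = {}

    def _lock_for(self, resource):
        return self._locks.setdefault(resource, threading.Lock())

    def _execute(self, action):
        # Acquire locks in a fixed (sorted) order to avoid deadlocks.
        for res in sorted(action.resources):
            self._lock_for(res).acquire()
        try:
            action.run()
        finally:
            for res in sorted(action.resources):
                self._lock_for(res).release()

    def run_all(self, actions):
        threads = [threading.Thread(target=self._execute, args=(a,))
                   for a in actions]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

# Usage: driving and speaking run in parallel; the two arm actions serialize.
seq = ParallelSequencer()
seq.run_all([
    Action("drive_to_kitchen", {"base"}, lambda: print("driving")),
    Action("announce_task", {"speaker"}, lambda: print("speaking")),
    Action("grasp_cup", {"arm"}, lambda: print("grasping")),
    Action("wave", {"arm"}, lambda: print("waving")),
])
```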
The BRICS component model: a model-based development paradigm for complex robotics software systems
(2013)
Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment
(2015)
The development of advanced robotic systems is challenging, as expertise from multiple domains needs to be integrated conceptually and technically. Model-driven engineering promises an efficient and flexible approach for developing robotics applications that copes with this challenge. Domain-specific modeling makes it possible to describe robotics concerns with concepts and notations closer to the respective problem domain. This raises the level of abstraction and results in models that are easier to understand and validate. Furthermore, model-driven engineering increases the level of automation, e.g. through code generation, and bridges the gap between modeling and implementation. The anticipated results are improved efficiency and quality of the robotics systems engineering process. In this contribution, we survey the available literature on domain-specific modeling and languages that target core robotics concerns. In total, 137 publications were identified that comply with a set of defined criteria, which we consider essential for contributions in this field. With the presented survey, we provide an overview of the state of the art of domain-specific modeling approaches in robotics. The surveyed publications are investigated from the perspective of users and developers of model-based approaches in robotics along a set of quantitative and qualitative research questions. The presented quantitative analysis clearly indicates the rising popularity of applying domain-specific modeling approaches to robotics in the academic community. Beyond this statistical analysis, we map the selected publications to a defined set of robotics subdomains and typical development phases in robotic systems engineering as a reference for potential users. Furthermore, we analyze these contributions from a language engineering viewpoint and discuss aspects such as the methods and tools used for their implementation, as well as their documentation status, platform integration, typical use cases, and the evaluation strategies used to validate the proposed approaches. Finally, we conclude with recommendations for discussion in the model-driven engineering and robotics community, based on the insights gained in this survey.
We are happy to present the special issue on Best Practice in Robot Software Development of the Journal of Software Engineering for Robotics! The spark for this special issue came during the eighth workshop on Software Development and Integration in Robotics (SDIR) at the 2013 IEEE International Conference on Robotics and Automation. The workshop focused on robot software architectures, and the fruitful discussions made it clear that the design, development, and deployment of robot software is always an interplay between competing aspects. These are often couched in antagonistic pairs, such as dependability versus performance, and prominently include quality attributes as well as functional, nonfunctional, and application requirements.
RPSL meets lightning: A model-based approach to design space exploration of robot perception systems
(2017)
RoCKIn@Work focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a larger user story painting the picture of a scaled-down real-world factory scenario. The elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, i.e. machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure that the gathered data could be evaluated automatically. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility in their own laboratories. The task and functionality benchmarks are explained in detail, starting with their use case in industry, further detailing their execution, and providing information on the scoring and ranking mechanisms for each specific benchmark.
As robots become ubiquitous and more capable, there is a pressing need for solid robot software development methods to broaden the task spectrum of robots. This thesis is concerned with improving the software engineering of robot perception systems. The presented research employs a model-based approach to provide the means to represent knowledge about robotics software. The thesis is divided into three parts, namely research on the specification, deployment, and adaptation of robot perception systems.
A few mobile robot developers already test their software on simulated robots in virtual environments or sceneries. However, the majority still shy away from simulation-based test campaigns because it remains challenging to specify and execute suitable testing scenarios, that is, models of the environment and the robots' tasks. Through developer interviews, we identified that managing the enormous variability of testing scenarios is a major barrier to the application of simulation-based testing in robotics. Furthermore, traditional CAD or 3D-modelling tools such as SolidWorks, 3ds Max, or Blender are not suitable for specifying sceneries that vary significantly and serve different testing objectives. For some testing campaigns, the scenery must replicate the dynamic (e.g., opening doors) and static features of real-world environments, whereas for others, a simplified scenery is sufficient. Similarly, the task and mission specifications used for simulation-based testing range from simple point-to-point navigation tasks to more elaborate tasks that require advanced deliberation and decision-making. We propose the concept of composable and executable scenarios, and associated tooling, to support developers in specifying, reusing, and executing scenarios for the simulation-based testing of robotic systems. Our approach differs from traditional approaches in that it offers a means of creating scenarios that allow the addition of new semantics (e.g., dynamic elements such as doors or varying task specifications) to existing models without altering them. Thus, we can systematically construct richer scenarios that remain manageable. We evaluated our approach in a small simulation-based testing campaign, with scenarios defined around the navigation stack of a mobile robot. The scenarios gradually increased in complexity, composing new features into the scenery of previous scenarios. Our evaluation demonstrated how our approach can facilitate the reuse of models, and it revealed errors in the configuration of the publicly available navigation stack of our system under test (SUT) that had gone unnoticed despite its frequent use.
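As an illustration of the composability idea, the following Python sketch layers new dynamic semantics onto an existing scenery model without modifying it. The `Scenery`, `Scenario`, and `with_dynamic` names are hypothetical stand-ins; the authors' actual model-based tooling is not shown here.

```python
# Rough sketch: new features (dynamic elements, richer tasks) are composed
# onto an existing scenery model, leaving the original untouched.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenery:
    name: str
    static_elements: tuple = ()   # walls, furniture, ...
    dynamic_elements: tuple = ()  # opening doors, moving people, ...

@dataclass(frozen=True)
class Scenario:
    scenery: Scenery
    task: str  # e.g. a point-to-point navigation goal

def with_dynamic(scenery: Scenery, *elements) -> Scenery:
    """Compose new dynamic semantics onto an existing scenery,
    returning a new model instead of altering the original."""
    return Scenery(
        name=f"{scenery.name}+dynamic",
        static_elements=scenery.static_elements,
        dynamic_elements=scenery.dynamic_elements + elements,
    )

# Base scenery reused across a test campaign:
lab = Scenery("lab_floor", static_elements=("walls", "tables"))

# Scenario 1: simple navigation in the static scenery.
s1 = Scenario(lab, task="navigate(door_a -> desk_3)")

# Scenario 2: the same scenery composed with an opening door; s1 is unchanged.
s2 = Scenario(with_dynamic(lab, "door_a_opens_at_t=10s"),
              task="navigate(door_a -> desk_3)")
```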
This dataset contains multimodal data recorded on robots performing robot-to-human and human-to-robot handovers. The intended application of the dataset is to develop and benchmark methods for failure detection during handovers; the trials in the dataset therefore contain both successful and failed handover actions. For a more detailed description of the dataset, please see the included Datasheet.
When performing manipulation-based activities such as picking objects, a mobile robot needs to position its base at a location that supports successful execution. To address this problem, prominent approaches typically rely on costly grasp planners to provide grasp poses for a target object, which are then analysed to identify the best robot placements for achieving each grasp pose. In this paper, we propose instead to first find robot placements that would not result in collision with the environment and from which picking up the object is feasible, and then to evaluate them to find the best placement candidate. Our approach takes into account the robot's reachability, as well as RGB-D images and occupancy grid maps of the environment, to identify suitable robot poses. The proposed algorithm is embedded in a service robotic workflow in which a person points to select the target object for grasping. We evaluate our approach in a series of grasping experiments against an existing baseline implementation that sends the robot to a fixed navigation goal. The experimental results show how the approach allows the robot to grasp the target object from locations that are very challenging for the baseline implementation.
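The placement-first strategy can be sketched as follows: sample candidate base poses around the target, filter out those that collide or from which the object is unreachable, and only then rank the survivors. The helper predicates below are assumptions standing in for the paper's occupancy-grid, RGB-D, and reachability checks.

```python
# Hedged sketch of a placement-first search; all names are illustrative.
import math

def candidate_poses(target_xy, radii=(0.6, 0.8, 1.0), n_angles=16):
    """Sample base poses (x, y, heading) on rings around the target object."""
    tx, ty = target_xy
    for r in radii:
        for k in range(n_angles):
            a = 2 * math.pi * k / n_angles
            # Position on the ring, oriented towards the object.
            yield (tx + r * math.cos(a), ty + r * math.sin(a), a + math.pi)

def best_placement(target_xy, is_collision_free, is_reachable, score):
    """Filter infeasible poses first, then pick the highest-scoring one.
    `is_collision_free` would query the occupancy grid; `is_reachable`
    the robot's reachability model; `score` ranks the feasible poses."""
    feasible = [p for p in candidate_poses(target_xy)
                if is_collision_free(p) and is_reachable(p, target_xy)]
    if not feasible:
        return None  # no valid placement; the caller must re-plan
    return max(feasible, key=score)
```

The design point mirrors the abstract: feasibility checks are cheap and run first, so the expensive evaluation is spent only on placements that could actually work.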
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including features such as reliable and precise navigation, flexible manipulation, and robust object recognition.
The RoCKIn@Work Challenge
(2014)
We present a universal modular robot architecture. A robot consists of the following intelligent modules: a central control unit (CCU), a drive, actuators, a vision unit, and a sensor input unit. The robot's software and hardware fit into this structure. We define generic interface protocols between these units. If the robot has to solve a new application and is equipped with a different drive, new actuators, and different sensors, only the program for the new application has to be loaded into the CCU. The interfaces to the drive, the vision unit, and the other sensors are plug-and-play interfaces. The only constraint on the CCU program is the set of commands for the actuators.
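A minimal sketch of such a plug-and-play module interface, assuming a Python-style rendering: every module self-describes to the CCU, and application programs address modules only through this generic interface. The class and method names are illustrative, not the paper's actual protocol.

```python
# Illustrative sketch of a plug-and-play module interface; names are assumed.
from abc import ABC, abstractmethod

class Module(ABC):
    """Generic interface every intelligent module (drive, actuator,
    vision unit, sensor input unit) exposes to the CCU."""
    @abstractmethod
    def describe(self) -> dict:
        """Self-description so the CCU can discover capabilities."""

class Drive(Module):
    def describe(self):
        return {"type": "drive", "commands": ["set_velocity", "stop"]}
    def set_velocity(self, linear, angular):
        print(f"drive: v={linear} m/s, w={angular} rad/s")

class CCU:
    """Central control unit: application programs are loaded here and
    talk to the hardware only through the generic module interface."""
    def __init__(self):
        self.modules = {}
    def plug_in(self, module: Module):
        self.modules[module.describe()["type"]] = module
    def run_application(self, program):
        program(self.modules)  # swap the program, keep the hardware

# A new application is just a new program loaded into the CCU:
def patrol(modules):
    modules["drive"].set_velocity(0.3, 0.0)

ccu = CCU()
ccu.plug_in(Drive())
ccu.run_application(patrol)
```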