We developed a scene text recognition system with active vision capabilities, namely auto-focus, adaptive aperture control, and auto-zoom. Our localization system delimits text regions in images with complex backgrounds and is based on an attentional cascade, asymmetric AdaBoost, decision trees, and Gaussian mixture models. We believe that text could become a valuable source of semantic information for robots, and we aim to raise interest in it within the robotics community. Moreover, thanks to the robot's pan-tilt-zoom camera and the active vision behaviors, the robot can use its affordances to overcome hindrances to the perceptual task. Detrimental conditions such as poor illumination, blur, and low resolution are very hard to deal with once an image has been captured, but can often be prevented at capture time. We evaluated the localization algorithm on a public dataset and on one of our own, with encouraging results. Furthermore, we report an active vision experiment which leads us to conclude that active sensing in general should be considered early on when addressing complex perceptual problems in embodied agents.
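The attentional cascade mentioned above is, at its core, a chain of increasingly expensive classifiers with early rejection. A minimal sketch of that control flow (the stage predicates here are placeholders, not the paper's actual AdaBoost or decision-tree stages):

```python
def cascade_classify(region, stages):
    """Attentional cascade: cheap stages run first, so most non-text
    regions are rejected early; only a region that passes every stage
    is reported as text.  `stages` is a list of boolean predicates."""
    for stage in stages:
        if not stage(region):
            return False          # early rejection keeps the cascade fast
    return True
```

The speed of such a cascade comes from ordering the stages so that the cheapest, highest-rejection tests run first.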
This work presents a person-independent pointing gesture recognition application. It uses simple but effective features for the robust tracking of the head and hand of the user in an unconstrained environment. The application detects when tracking is lost and can reinitialize automatically. The pointing gesture recognition accuracy is improved by the proposed fingertip detection algorithm and by the detection of the width of the face. The experimental evaluation with eight different subjects shows that the overall average pointing gesture recognition rate of the system for distances up to 250 cm (head to pointing target) is 86.63% (with a distance of 23 cm between objects). Considering only frontal pointing gestures, the recognition rate is 90.97% for distances up to 250 cm and even 95.31% for distances up to 194 cm. The average error angle is 7.28°.
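The abstract does not spell out the geometry, but a pointing gesture is commonly scored as the angle between the head-to-fingertip ray and the head-to-target direction, which matches the error-angle measure quoted. A minimal sketch under that assumption (all names hypothetical):

```python
import math

def pointing_error_deg(head, tip, target):
    """Angle in degrees between the head-to-fingertip ray and the
    head-to-target direction (the error-angle measure in the abstract)."""
    ray = [t - h for h, t in zip(head, tip)]
    to_target = [t - h for h, t in zip(head, target)]
    dot = sum(a * b for a, b in zip(ray, to_target))
    n1 = math.sqrt(sum(a * a for a in ray))
    n2 = math.sqrt(sum(b * b for b in to_target))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

def select_target(head, tip, targets):
    """Pick the candidate target with the smallest angular error."""
    return min(targets, key=lambda t: pointing_error_deg(head, tip, t))
```

With the head at the origin and the fingertip at (0, 0, 1), a target at (0, 0, 2) is selected over one at (1, 0, 2), since its angular error is zero.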
We propose an artificial slime mould model (ASMM) inspired by the plasmodium of Physarum polycephalum. The ASMM consists of multiple slimes, each of which shares energy via a tube with its neighbouring slimes. Outer slimes sense their environment and conform to it, and periodically transmit information about their surroundings to inner slimes via a contraction wave. Thus, the ASMM shows how slimes can sense a better environment even when that environment is not adjacent to them, and can subsequently move in the direction of an attractant.
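The energy sharing between neighbouring slimes can be pictured as a simple diffusion update along the tube network. A toy sketch of one such step on a chain of slimes (the update rule and rate are illustrative, not the paper's actual model):

```python
def share_energy(energies, rate=0.25):
    """One update step on a chain of slimes: each slime exchanges a
    fraction of the energy difference with its neighbours (a simple
    diffusion rule).  Total energy is conserved because each pairwise
    transfer is symmetric."""
    n = len(energies)
    new = list(energies)
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                new[i] += rate * (energies[j] - energies[i])
    return new
```

Starting from all the energy in one slime, repeated steps spread it down the chain, which is the mechanism by which information about a good environment can reach non-adjacent slimes.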
Service robots performing complex tasks involving people in homes or public environments are becoming more and more common, and there is huge interest from both research and industry. The RoCKIn@Home challenge has been designed to compare and evaluate different approaches and solutions to tasks related to the development of domestic and service robots. RoCKIn@Home competitions have been designed and executed according to the benchmarking methodology developed during the project and received very positive feedback from the participating teams. Task and functionality benchmarks are explained in detail.
Motivation is a key ingredient for learning: successful learning is possible only if the learner is motivated. Educational robotics has proven to be an excellent tool for motivating students of all ages from 8 to 80. Robot competitions for kids, like RoboCupJunior, are instrumental in sustaining motivation over a significant period of time. This increases the chances that the learner acquires more in-depth knowledge of the subject area and develops a genuine interest in the field.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction wave in their body to communicate with each other and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to four types of mazes and confirm that it can solve them.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use the contraction wave in their body to communicate with each other and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: the direction and a food index. We apply the APA to maze solving and to route planning on a road map.
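The wave idea behind the APA can be pictured as breadth-first propagation from the food source: each reachable cell ends up holding a distance to the food (a "food index") and the neighbour to move towards. A toy sketch of that propagation (the actual APA update rules are richer than this):

```python
from collections import deque

def propagate_wave(maze, food):
    """Breadth-first 'contraction wave' from the food cell.  `maze` is a
    set of free (x, y) cells; the wave leaves every reachable cell with
    its distance to the food and the neighbour that leads towards it."""
    dist = {food: 0}
    towards = {}
    queue = deque([food])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt in maze and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1   # food index
                towards[nxt] = (x, y)          # direction information
                queue.append(nxt)
    return dist, towards
```

After the wave has passed, any cell can solve the maze locally by repeatedly stepping to its `towards` neighbour, which is the flavour of distributed solving the abstract describes.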
Vision-based motion detection, an important skill for an autonomous mobile robot operating in dynamic environments, is particularly challenging when the robot's camera is in motion. In this paper, we use a Fourier-Mellin transform-based image registration method to compensate for camera motion before applying temporal differencing for motion detection. The approach is evaluated online as well as offline on a set of sequences recorded with a Care-O-bot 3, and compared with a feature-based method for image registration. In comparison to the feature-based method, our method performs better in terms of both registration robustness and false discovery rate.
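Phase correlation is the translation-recovery core of Fourier-Mellin registration; the full method additionally recovers rotation and scale via a log-polar remap before this stage. A minimal sketch of the translation stage with NumPy:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image `a` to image `b`
    via phase correlation: the normalised cross-power spectrum has a
    sharp impulse at the shift."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real           # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

For example, with `b = np.roll(a, (3, -2), axis=(0, 1))` the function returns `(3, -2)`; once the camera-induced shift is undone, temporal differencing can isolate genuinely moving objects.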
Autonomous industrial robots need to recognize objects robustly in cluttered environments. The use of RGB-D cameras has advanced research in 3D object recognition, but textureless objects remain a challenge. We propose a set of features, including the bounding box, mean circle fit, and radial density distribution, that describe the size, shape, and colour of objects. The features are extracted from point clouds of a set of objects and used to train an SVM classifier. Various combinations of the proposed features are tested to determine their influence on the recognition rate. Medium-sized objects are recognized with high accuracy, whereas small objects have a lower recognition rate. The minimum range and resolution of the cameras are still an issue but are expected to improve as the technology matures.
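As an illustration of the kind of size and shape features the abstract names, here is a toy descriptor over a point cloud; the exact feature definitions in the paper may differ, and the histogram here is only one plausible reading of a radial density distribution:

```python
import numpy as np

def cloud_features(points, n_bins=4):
    """Toy descriptor for an N x 3 point cloud: bounding-box extents
    (size) plus a radial density histogram of point distances from the
    vertical centroid axis (a rough shape cue).  Such fixed-length
    vectors are what an SVM classifier would be trained on."""
    extents = points.max(axis=0) - points.min(axis=0)
    centroid = points.mean(axis=0)
    radii = np.linalg.norm(points[:, :2] - centroid[:2], axis=1)
    hist, _ = np.histogram(radii, bins=n_bins, range=(0, radii.max() + 1e-9))
    density = hist / len(points)      # fraction of points in each shell
    return np.concatenate([extents, density])
```

The descriptor is deliberately independent of point count, so clouds of different densities map into the same feature space.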
Presented in this paper is a complete system for robust autonomous navigation in cluttered and dynamic environments. It consists of computationally efficient approaches to the problems of simultaneous localization and mapping, path planning, and motion control, all based on a memory-efficient environment representation. These components have been implemented and integrated with additional components for human-robot interaction and object manipulation on a mobile manipulation platform for service robot applications. The resulting system performed very successfully in the 2008 RoboCup@Home competition.
To perform a wide range of tasks, service robots need to robustly extract knowledge about the world from the data perceived through their sensors, even in the presence of varying context conditions. This makes the design and development of robot perception architectures a challenging exercise. In this paper we propose a robot perception architecture which can select and execute different perception graphs at runtime based on monitored context changes. To achieve this, the architecture is structured as a feedback loop and contains a repository of perception graph configurations suitable for various context conditions.
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including features such as reliable and precise navigation, flexible manipulation, and robust object recognition.
Software development for robots is a knowledge-intensive exercise. To capture this knowledge explicitly and formally in the form of various domain models, roboticists have recently employed model-driven engineering (MDE) approaches. However, these models are merely seen as a way to support humans during the robot's software design process. We argue that the robots themselves should be first-class consumers of this knowledge, so that they can autonomously adapt their software to the various and changing run-time requirements induced, for instance, by the robot's tasks or environment. Motivated by knowledge-enabled approaches, we address this problem by employing a graph-based knowledge representation that allows us not only to persistently store domain models, but also to formulate powerful queries for run-time adaptation. We have evaluated our approach in an integrated, real-world system using the neo4j graph database, and we report some lessons learned. Further, we show that the graph database imposes only little overhead on the system's overall performance.
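To make the graph-query idea concrete, here is a miniature in-memory stand-in: domain models stored as labelled nodes and typed edges, queried to adapt a component at run time. In the paper this role is played by queries against a neo4j store; all node, property, and relation names below are invented for illustration:

```python
# Miniature stand-in for a graph-based knowledge store: nodes with
# labels and properties, edges with relation types.
nodes = {
    "cam_hd":   {"label": "Sensor", "modality": "rgb", "hz": 15},
    "cam_fast": {"label": "Sensor", "modality": "rgb", "hz": 60},
    "detect":   {"label": "Component", "needs_hz": 30},
}
edges = [
    ("detect", "CAN_USE", "cam_hd"),
    ("detect", "CAN_USE", "cam_fast"),
]

def usable_sensors(component):
    """Run-time adaptation query: which connected sensors satisfy the
    component's frame-rate requirement?  A graph database would answer
    this with a declarative query instead of a Python loop."""
    need = nodes[component]["needs_hz"]
    return [dst for src, rel, dst in edges
            if src == component and rel == "CAN_USE"
            and nodes[dst]["hz"] >= need]
```

The point of the graph representation is that such queries can combine task, hardware, and environment models without the component code hard-wiring any of them.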
Service robots are becoming increasingly capable and deliver a broader spectrum of services, all of which require a wide range of perceptual capabilities. These capabilities must cope with dynamically changing requirements, which makes the design and implementation of a robot perception architecture a complex, tedious, and error-prone exercise. We suggest specifying the integral parts of robot perception architectures using explicit models, which makes them easy to configure, modify, and validate. The paper presents the domain-specific language RPSL, some examples of its application, the current state of the implementation, and some validation experiments.
We present the design and development of a benchmarking testbed for the Factory of the Future. The testbed, as a physical installation, makes it possible to study, compare, and assess robotics scenarios involving the integration of mobile robots and manipulators with automation equipment, large-scale integration of service robots and industrial robots, cohabitation of robots and humans, and cooperation of multiple robots and/or humans. We also report on the lessons learned from using the testbed in recent robot competitions.
Robot programming is an interdisciplinary and knowledge-intensive task. All too often, knowledge of the different robotics domains remains implicit. Although this is slowly changing with the rising interest in explicit knowledge representations through domain-specific languages (DSLs), very little is known about the DSL design and development processes themselves. To this end, we present and discuss the reverse-engineered process from the development of our Grasp Domain Definition Language (GDDL), a declarative DSL for the explicit specification of grasping problems. An important finding is that the process comprises building blocks similar to those of existing software development processes, such as the Unified Process.
Grasping objects and using them in a task-oriented manner is challenging for a robot. It requires an understanding of the object, the robot's capabilities, and the task to be executed. We argue that an explicit representation of these domains increases reusability and robustness of the resulting system. We present the declarative Grasp Domain Definition Language (GDDL) which enables the explicit grasp-planner-independent specification of grasping problems. The formal model underlying GDDL enables the definition of formal constraints that are validated during design time and run time. Our approach has been realized on two real robots.
Most of the recent robot software frameworks follow a component-oriented development approach. They allow developers to compose a distributed application from a set of interacting components. Although these frameworks provide rich functionality, they often fail to cope with non-functional aspects (e.g., network scalability, predictability of system behavior) involved in system design, especially in distributed settings. This research sets out to address the aforementioned quality attributes by introducing a pragmatic model, the Protocol Stack View (PSV), for the analysis of distributed robotic software. The model relies on the fact that distributed software can be viewed in terms of three main elements: components, ports, and connectors. It specifically focuses on the structure and semantics of software connectors at the implementation level. To prove the effectiveness and usefulness of PSV, a set of experiments was conducted to analyze scalability and to determine the configurable elements that affect it. The experiments are based on a comparison of the communication infrastructure provided by two existing software packages, ROS and ZeroMQ.
The ability to detect people in domestic and unconstrained environments is crucial for every service robot. Knowing where people are is required for several tasks, such as navigation with dynamic obstacle avoidance and human-robot interaction. In this paper we propose a people detection approach based on 3D data provided by an RGB-D camera. We introduce a novel 3D feature descriptor based on Local Surface Normals (LSN) which is used to learn a classifier in a supervised machine learning manner. In order to increase the system's flexibility and to detect people even under partial occlusion, we introduce a top-down/bottom-up segmentation. We deployed the people detection system on a real-world service robot operating at a reasonable frame rate of 5 Hz. The experimental results show that our approach is able to detect persons in various poses and motions, such as sitting, walking, and running.
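The local surface normals that the descriptor builds on are conventionally estimated by PCA over each point's neighbourhood; a minimal sketch of that standard step (the paper's exact descriptor construction on top of the normals is not shown here):

```python
import numpy as np

def surface_normal(neighborhood):
    """Estimate the local surface normal of a point from its k nearest
    neighbours (rows of an N x 3 array): the eigenvector of the
    neighbourhood covariance with the smallest eigenvalue points along
    the direction of least variance, i.e. off the local surface."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, 0]                     # smallest-variance direction
```

Note that the sign of the normal is ambiguous; in practice it is flipped to face the sensor viewpoint before being fed into a descriptor.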
Deploying a complex robot software architecture on real robot systems and getting it to run reliably is a challenging task. We argue that software deployment decisions should be separated as much as possible from the core development of software functionalities. This will make the developed software more independent of a particular hardware architecture (and thus more reusable) and allow it to be deployed more flexibly on a wider variety of robot platforms. This paper presents a domain-specific language (DSL) which supports this idea and demonstrates how the DSL is used in a model-driven engineering-based development process. A practical example of applying the DSL to the development of an application for the KUKA youBot platform is given.
In recent years, increased research activity in robotics has led to advances in both hardware and software technologies. More complex hardware requires increasingly sophisticated software infrastructure to operate it, which has led to the development of several different robotics software frameworks. The driving forces behind the development of such frameworks are to cope with the heterogeneous and distributed nature of robotics software applications and to exploit more advanced software technologies in the robotics domain. So far, though, there has not been much effort to foster cooperation among these frameworks, either at the conceptual or the implementation level. Our research aims to analyse existing robotics software frameworks in order to identify possible levels of interoperability among them. The problem is tackled by determining a set of software concepts, in our case centering around component-based software development, which are used to identify a set of common architectural elements in an analysis of existing robotics software frameworks. These common elements can then serve as interoperability points among software frameworks. Exploiting such interoperability gives developers new architectural design choices and fosters reuse of functionality already developed, albeit in another framework. It is also highly relevant for the development of new robotics software frameworks, as it opens smoother migration paths for developers to switch from one framework to another.
RoCKIn@Work was focused on benchmarks in the domain of industrial robots. Both task and functionality benchmarks were derived from real-world applications. All of them were part of a larger user story painting the picture of a scaled-down real-world factory scenario. Elements used to build the testbed were chosen from materials common in modern manufacturing environments. Networked devices, i.e. machines controllable through a central software component, were also part of the testbed and introduced a dynamic component to the task benchmarks. Strict guidelines on data logging were imposed on participating teams to ensure that the gathered data could be evaluated automatically. This also had the positive effect of making teams aware of the importance of data logging, not only during a competition but also as a useful utility for research in their own laboratory. Task and functionality benchmarks are explained in detail, starting with their use case in industry, then detailing their execution, and finally providing information on the scoring and ranking mechanisms for each specific benchmark.
This paper describes activities that promote robot competitions in Europe, using and expanding RoboCup concepts and best practices, through two projects funded by the European Commission under its FP7 and Horizon2020 programmes. The RoCKIn project ended in December 2015, and its goal was to speed up the progress towards smarter robots through scientific competitions. Two challenges were selected for the competitions due to their high relevance and impact on Europe's societal and industrial needs: domestic service robots (RoCKIn@Home) and innovative robot applications in industry (RoCKIn@Work). RoCKIn extended the corresponding RoboCup leagues by introducing new and prevailing research topics, such as networking mobile robots with sensors and actuators spread over the environment, in addition to specifying objective scoring and benchmarking criteria and methods to assess progress. The European Robotics League (ERL) started recently and includes indoor competitions related to domestic and industrial robots, extending RoCKIn's rulebooks. Teams participating in the ERL must compete in at least two tournaments per year, which can take place either in a certified test bed (i.e., one based on the rulebooks) located in a European laboratory, or as part of a major robot competition event. The scores accumulated by the teams in their two best participations are used to rank them over a year.