We developed a scene text recognition system with active vision capabilities, namely auto-focus, adaptive aperture control, and auto-zoom. Our localization system delimits text regions in images with complex backgrounds and is based on an attentional cascade, asymmetric AdaBoost, decision trees, and Gaussian mixture models. We believe text could become a valuable source of semantic information for robots, and we aim to raise interest in it within the robotics community. Thanks to the robot's pan-tilt-zoom camera and its active vision behaviors, the robot can exploit its affordances to overcome hindrances to the perceptual task: detrimental conditions such as poor illumination, blur, and low resolution are very hard to correct once an image has been captured, but can often be prevented at capture time. We evaluated the localization algorithm on a public dataset and on one of our own, with encouraging results. Furthermore, we present an experiment in active vision which suggests that active sensing in general should be considered early on when addressing complex perceptual problems in embodied agents.
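To make the attentional-cascade idea concrete, here is a minimal Python sketch: cheap tests reject most candidate windows early, so only promising regions reach the expensive stages. The stage tests and feature names (contrast, edge_density, stroke_score) are illustrative placeholders, not the paper's trained AdaBoost and decision-tree stages.

    # Minimal sketch of an attentional cascade: stages run in order of cost
    # and a window is rejected at the first failing stage.
    def make_cascade(stages):
        def classify(window):
            return all(stage(window) for stage in stages)  # short-circuits on failure
        return classify

    is_text = make_cascade([
        lambda w: w["contrast"] > 0.3,                # stage 1: cheap contrast test
        lambda w: 0.1 < w["edge_density"] < 0.9,      # stage 2: edge statistics
        lambda w: w["stroke_score"] > 0.5,            # stage 3: most expensive test
    ])

    print(is_text({"contrast": 0.5, "edge_density": 0.4, "stroke_score": 0.7}))  # True
    print(is_text({"contrast": 0.1, "edge_density": 0.4, "stroke_score": 0.7}))  # False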
This work presents a person-independent pointing gesture recognition application. It uses simple but effective features for robust tracking of the user's head and hand in an unconstrained environment. The application detects when tracking is lost and reinitializes automatically. The pointing gesture recognition accuracy is improved by the proposed fingertip detection algorithm and by detecting the width of the face. An experimental evaluation with eight subjects shows that the system's overall average pointing gesture recognition rate for distances up to 250 cm (head to pointing target) is 86.63%, with a distance of 23 cm between objects. Considering only frontal pointing gestures, the recognition rate is 90.97% for distances up to 250 cm, and 95.31% for distances up to 194 cm. The average error angle is 7.28°.
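The abstract does not spell out how the pointing target is chosen, but a common formulation selects the object closest to the head-to-fingertip ray; the sketch below (hypothetical function and variable names) illustrates that geometry and how an error angle like the reported 7.28° can be measured.

    import numpy as np

    def pointing_target(head, fingertip, objects):
        """Pick the object closest to the head->fingertip pointing ray.

        head, fingertip: 3D positions (metres); objects: list of 3D positions.
        Returns the index of the best-matching object and the angle (degrees)
        between the pointing ray and the head->object direction.
        """
        ray = fingertip - head
        ray = ray / np.linalg.norm(ray)
        best, best_angle = None, np.inf
        for i, obj in enumerate(objects):
            to_obj = obj - head
            to_obj = to_obj / np.linalg.norm(to_obj)
            angle = np.degrees(np.arccos(np.clip(ray @ to_obj, -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = i, angle
        return best, best_angle

    # Example: two candidate objects 23 cm apart, about 2 m from the head.
    head = np.array([0.0, 1.6, 0.0])
    fingertip = np.array([0.2, 1.4, 0.4])
    objects = [np.array([0.9, 0.8, 2.0]), np.array([1.13, 0.8, 2.0])]
    print(pointing_target(head, fingertip, objects))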
We propose an artificial slime mould model (ASMM) inspired by the plasmodium of Physarum polycephalum (P. polycephalum). The ASMM consists of multiple slimes, and each slime shares energy with its neighbors via a tube. Outer slimes sense their environment and adapt to it, periodically transmitting information about their surroundings to inner slimes via a contraction wave. Thus, the ASMM shows how slimes can sense a better environment even when it is not adjacent to them, and can subsequently move in the direction of an attractant.
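As a rough illustration of this mechanism, the following 1-D sketch lets the outer cells of a slime chain sense the environment while both the sensed signal and energy diffuse toward the inner cells; the update rules and constants are invented for illustration and are not taken from the ASMM.

    import numpy as np

    # A chain of slime cells connected by tubes. Outer cells sample their
    # environment; the sensed values and energy diffuse inward, standing in
    # for the contraction wave, so inner cells learn which end is better.
    n = 10
    energy = np.full(n, 1.0)
    sensed = np.zeros(n)                       # wave-borne environment signal
    environment = {0: 0.2, n - 1: 0.9}         # attractant quality at the two ends

    for _ in range(50):
        for idx, quality in environment.items():
            sensed[idx] = quality              # outer slimes sense their surroundings
        sensed[1:-1] = 0.5 * (sensed[:-2] + sensed[2:])   # wave carries info inward
        lap = np.zeros(n)
        lap[1:-1] = energy[:-2] + energy[2:] - 2.0 * energy[1:-1]
        energy += 0.1 * lap                    # neighbours share energy via tubes

    mid = n // 2
    print("move", "right" if sensed[mid + 1] > sensed[mid - 1] else "left")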
Motivation is a key ingredient for learning: successful learning is possible only if the learner is motivated. Educational robotics has proven to be an excellent tool for motivating students of all ages, from 8 to 80. Robot competitions for kids, like RoboCupJunior, are instrumental in sustaining motivation over a significant period of time. This increases the chances that the learner acquires more in-depth knowledge of the subject area and develops a genuine interest in the field.
Speech understanding is a fundamental feature for many applications focused on human-robot interaction. Although many techniques and several services for speech recognition and natural language understanding have been developed in recent years, their specific implementation and validation on domestic service robots has received little attention. In this paper, we describe the implementation and results of a functional benchmark for speech understanding in service robotics that has been developed and tested in the context of several robot competitions: RoboCup@Home, RoCKIn@Home, and the European Robotics League on Service Robots. We present the different approaches used by the teams in the competitions and discuss the evaluation results obtained.
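The abstract does not define the benchmark's scoring rule; one plausible scheme, sketched below with hypothetical command frames, scores each interpreted command slot by slot against a ground-truth frame.

    # Hypothetical scoring of a speech-understanding benchmark run: each
    # utterance has a ground-truth command frame (action + arguments) and
    # the robot's interpretation is scored slot by slot.
    def score_interpretation(truth, hypothesis):
        """Return (correct_slots, total_slots) for one command frame."""
        total = len(truth)
        correct = sum(1 for k, v in truth.items() if hypothesis.get(k) == v)
        return correct, total

    runs = [
        ({"action": "bring", "object": "coke", "destination": "table"},
         {"action": "bring", "object": "coke", "destination": "kitchen"}),
        ({"action": "go", "destination": "bedroom"},
         {"action": "go", "destination": "bedroom"}),
    ]
    correct = total = 0
    for truth, hyp in runs:
        c, t = score_interpretation(truth, hyp)
        correct, total = correct + c, total + t
    print(f"slot accuracy: {correct / total:.2f}")   # 0.80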
Adapting plans to changes in the environment by finding alternatives and taking advantage of opportunities is a common human behavior. The need for such behavior is often rooted in the uncertainty produced by our incomplete knowledge of the environment. While several existing planning approaches deal with such issues, artificial agents still lack the robustness that humans display in accomplishing their tasks. In this work, we address this brittleness by combining Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. The approach allows a domestic service robot to find ways to get a job done by making substitutions. We show how knowledge is modeled, how the reasoning process is used to create a constrained planning problem, and how the system handles cases where plan generation fails due to missing/unavailable objects. The results of the evaluation for two tasks in a domestic service domain show the viability of the approach in finding and making the appropriate goal transformations.
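A toy version of the substitution step might look as follows: a small taxonomy stands in for the Description Logic ontology, and a path-based similarity picks the best available replacement for a missing object. All names and the similarity measure are illustrative, not the paper's.

    # Substitution via conceptual similarity over a toy taxonomy
    # (child -> parent); a closer common ancestor means a better substitute.
    TAXONOMY = {
        "mug": "drinking_vessel", "cup": "drinking_vessel", "glass": "drinking_vessel",
        "drinking_vessel": "container", "bowl": "container", "container": "object",
    }

    def ancestors(concept):
        chain = [concept]
        while concept in TAXONOMY:
            concept = TAXONOMY[concept]
            chain.append(concept)
        return chain

    def similarity(a, b):
        """Shorter path to the common ancestor -> higher score."""
        anc_a, anc_b = ancestors(a), ancestors(b)
        common = next(c for c in anc_a if c in anc_b)
        depth = anc_a.index(common) + anc_b.index(common)
        return 1.0 / (1.0 + depth)

    def best_substitute(missing, available):
        return max(available, key=lambda obj: similarity(missing, obj))

    # The plan needs a mug, but only these objects are on the shelf:
    print(best_substitute("mug", ["glass", "bowl"]))   # -> "glass"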
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use contraction waves in their bodies to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: a direction and a food index. We apply the APA to four types of mazes and confirm that it can solve them.
This paper proposes an Artificial Plasmodium Algorithm (APA) that mimics the contraction wave of a plasmodium of Physarum polycephalum. Plasmodia use contraction waves in their bodies to communicate with one another and to transport nutrients. In the APA, each plasmodium carries two pieces of wave information: a direction and a food index. We apply the APA to maze solving and to route planning on a road map.
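The two APA abstracts above both rest on a wave that spreads through free space while each reached cell stores the direction the wave came from. The sketch below captures only that propagation idea (it reduces to a breadth-first wave; the food index is simplified away):

    from collections import deque

    # A wave spreads from the food through the free cells of the maze; every
    # reached cell remembers the direction the wave arrived from (its pointer
    # back toward the food). Following those pointers from the start solves
    # the maze.
    MAZE = ["#########",
            "#S..#...#",
            "##.##.#.#",
            "#.....#F#",
            "#########"]

    def solve(maze):
        grid = [list(row) for row in maze]
        find = lambda ch: next((r, c) for r, row in enumerate(grid)
                               for c, cell in enumerate(row) if cell == ch)
        start, food = find("S"), find("F")
        toward_food = {food: None}           # direction info carried by the wave
        wave = deque([food])
        while wave:
            r, c = wave.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (r + dr, c + dc)
                if grid[nxt[0]][nxt[1]] != "#" and nxt not in toward_food:
                    toward_food[nxt] = (r, c)   # wave arrived from (r, c)
                    wave.append(nxt)
        path, cell = [], start
        while cell is not None:
            path.append(cell)
            cell = toward_food[cell]
        return path

    print(solve(MAZE))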
Humans exhibit flexible and robust behavior in achieving their goals. We make suitable substitutions for objects, actions, or tools to get the job done. When opportunities that would allow us to reach our goals with less effort arise, we often take advantage of them. Robots are not nearly as robust in handling such situations. Enabling a domestic service robot to find ways to get a job done by making substitutions is the goal of our work. In this paper, we highlight the challenges faced in our approach to combine Hierarchical Task Network planning, Description Logics, and the notions of affordances and conceptual similarity. We present open questions in modeling the necessary knowledge, creating planning problems, and enabling the system to handle cases where plan generation fails due to missing/unavailable objects.
Vision-based motion detection, an important skill for an autonomous mobile robot operating in dynamic environments, is particularly challenging when the robot's camera is itself in motion. In this paper, we use a Fourier-Mellin transform-based image registration method to compensate for camera motion before applying temporal differencing for motion detection. The approach is evaluated both online and offline on a set of sequences recorded with a Care-O-bot 3, and compared with a feature-based method for image registration. Our method outperforms the feature-based method in terms of both the robustness of the registration and the false discovery rate.
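A translation-only approximation of this pipeline can be written with OpenCV's phase correlation; the full Fourier-Mellin method additionally recovers rotation and scale from a log-polar resampling of the Fourier magnitude spectrum. Function and parameter names below are assumptions for illustration.

    import cv2
    import numpy as np

    def motion_mask(prev_gray, curr_gray, thresh=25):
        """Register curr to prev (translation only), then difference the frames."""
        # Estimate the inter-frame camera shift via phase correlation.
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
        # Warp the current frame back by the estimated shift.
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        registered = cv2.warpAffine(curr_gray, m, curr_gray.shape[::-1])
        # Temporal differencing on the registered pair exposes independent motion.
        diff = cv2.absdiff(prev_gray, registered)
        return cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)[1]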
Autonomous industrial robots need to recognize objects robustly in cluttered environments. The use of RGB-D cameras has advanced research in 3D object recognition, but textureless objects remain a challenge. We propose a set of features, including the bounding box, mean circle fit, and radial density distribution, that describe the size, shape, and colour of objects. The features are extracted from point clouds of a set of objects and used to train an SVM classifier. Various combinations of the proposed features are tested to determine their influence on the recognition rate. Medium-sized objects are recognized with high accuracy, whereas small objects have a lower recognition rate. The minimum range and resolution of the cameras are still an issue but are expected to improve as the technology matures.
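A stripped-down version of this pipeline, shown below, extracts only bounding-box features (the mean circle fit, radial density distribution, and colour features are omitted) and trains an SVM on synthetic stand-in clouds:

    import numpy as np
    from sklearn.svm import SVC

    def simple_features(cloud):
        """Size features from an Nx3 point cloud: sorted bounding-box extents
        plus point count (a simplification of the paper's feature set)."""
        extents = cloud.max(axis=0) - cloud.min(axis=0)
        return np.concatenate([np.sort(extents), [len(cloud)]])

    # Illustrative training data: random clouds standing in for real scans.
    rng = np.random.default_rng(0)
    clouds = [rng.normal(scale=s, size=(200, 3)) for s in (0.01, 0.05) * 10]
    labels = ["small_part", "medium_part"] * 10
    clf = SVC(kernel="rbf").fit([simple_features(c) for c in clouds], labels)
    print(clf.predict([simple_features(rng.normal(scale=0.05, size=(200, 3)))]))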
Presented in this paper is a complete system for robust autonomous navigation in cluttered and dynamic environments. It consists of computationally efficient approaches to the problems of simultaneous localization and mapping, path planning, and motion control, all based on a memory-efficient environment representation. These components have been implemented and integrated with additional components for human-robot interaction and object manipulation on a mobile manipulation platform for service robot applications. The resulting system performed very successfully in the 2008 RoboCup@Home competition.
Application frameworks play a major role in fostering the reusability of robotic software solutions. They foster the reuse of both code and design, and offer a convenient model of object-oriented extensibility. They provide a powerful means of packaging well-understood, modular solutions for robotics as generic components. Nevertheless, few such frameworks exist in robotics, especially at the application level.
This paper presents the b-it-bots RoboCup@Work team and the current hardware and functional architecture of its KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including reliable and precise navigation, flexible manipulation, and robust object recognition.
Software development for robots is a knowledge-intensive exercise. To capture this knowledge explicitly and formally in the form of various domain models, roboticists have recently employed model-driven engineering (MDE) approaches. However, these models are merely seen as a way to support humans during the robot's software design process. We argue that the robots themselves should be first-class consumers of this knowledge, so that they can autonomously adapt their software to the various and changing run-time requirements induced, for instance, by the robot's tasks or environment. Motivated by knowledge-enabled approaches, we address this problem by employing a graph-based knowledge representation that allows us not only to persistently store domain models but also to formulate powerful queries for run-time adaptation. We have evaluated our approach in an integrated, real-world system using the neo4j graph database, and we report some lessons learned. Further, we show that the graph database imposes only a small overhead on the system's overall performance.
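The paper's domain models and graph schema are not given in the abstract, so the following sketch uses an invented Component/Capability schema purely to show the pattern: persist domain models in neo4j, then answer run-time adaptation questions with Cypher queries.

    from neo4j import GraphDatabase

    # The node labels, properties, and credentials below are illustrative,
    # not the schema from the paper: they sketch how a robot could query
    # persisted domain models at run time to reconfigure its software.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

    QUERY = """
    MATCH (c:Component)-[:PROVIDES]->(cap:Capability {name: $capability})
    WHERE c.max_payload_kg >= $payload
    RETURN c.name AS component
    """

    with driver.session() as session:
        for record in session.run(QUERY, capability="grasping", payload=0.5):
            print(record["component"])
    driver.close()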
We present the design and development of a benchmarking testbed for the Factory of the Future. The testbed is a physical installation that makes it possible to study, compare, and assess robotics scenarios involving the integration of mobile robots and manipulators with automation equipment, large-scale integration of service robots and industrial robots, cohabitation of robots and humans, and cooperation of multiple robots and/or humans. We also report on the lessons learned from using the testbed in recent robot competitions.
Robot programming is an interdisciplinary and knowledge-intensive task. All too often, knowledge of the different robotics domains remains implicit. Although this is slowly changing with the rising interest in explicit knowledge representations through domain-specific languages (DSLs), very little is known about the DSL design and development processes themselves. To this end, we present and discuss the reverse-engineered process from the development of our Grasp Domain Definition Language (GDDL), a declarative DSL for the explicit specification of grasping problems. An important finding is that the process comprises building blocks similar to those of existing software development processes, such as the Unified Process.
Grasping objects and using them in a task-oriented manner is challenging for a robot. It requires an understanding of the object, the robot's capabilities, and the task to be executed. We argue that an explicit representation of these domains increases reusability and robustness of the resulting system. We present the declarative Grasp Domain Definition Language (GDDL) which enables the explicit grasp-planner-independent specification of grasping problems. The formal model underlying GDDL enables the definition of formal constraints that are validated during design time and run time. Our approach has been realized on two real robots.
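GDDL's concrete syntax is not shown in the abstract; the Python stand-in below only illustrates the kind of planner-independent content such a specification might carry and how declared constraints can be validated, with all field names invented for illustration.

    # Not GDDL's actual syntax: a stand-in for a declarative grasp
    # specification, plus a design-time check over the stated constraints.
    grasp_spec = {
        "object": {"type": "cylinder", "diameter_m": 0.07, "mass_kg": 0.3},
        "gripper": {"max_opening_m": 0.09, "max_payload_kg": 1.0},
        "task": {"name": "pour", "required_grasp": "side"},
    }

    def validate(spec):
        obj, grip = spec["object"], spec["gripper"]
        assert obj["diameter_m"] < grip["max_opening_m"], "object too wide"
        assert obj["mass_kg"] <= grip["max_payload_kg"], "object too heavy"

    validate(grasp_spec)   # design-time check; run-time checks are analogous
    print("grasp specification is consistent")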
The ability to detect people in domestic, unconstrained environments is crucial for every service robot. Knowing where people are is required for several tasks, such as navigation with dynamic obstacle avoidance and human-robot interaction. In this paper we propose a people detection approach based on 3D data provided by an RGB-D camera. We introduce a novel 3D feature descriptor based on Local Surface Normals (LSN), which is used to train a classifier in a supervised manner. To increase the system's flexibility and to detect people even under partial occlusion, we introduce a top-down/bottom-up segmentation. We deployed the people detection system on a real-world service robot operating at a reasonable frame rate of 5 Hz. The experimental results show that our approach is able to detect persons in various poses and motions, such as sitting, walking, and running.
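A simplified stand-in for an LSN-style descriptor can be computed with a PCA over each point's neighbourhood followed by a histogram of normal inclinations; the neighbourhood size, binning, and reference axis below are illustrative choices, not the paper's.

    import numpy as np
    from scipy.spatial import cKDTree

    def lsn_descriptor(cloud, k=10, bins=8):
        """Histogram of local surface normal inclinations for an Nx3 cloud.

        Normals come from a PCA over each point's k nearest neighbours; the
        descriptor is the histogram of their angles to the vertical axis.
        """
        tree = cKDTree(cloud)
        angles = []
        for p in cloud:
            _, idx = tree.query(p, k=k)
            nbrs = cloud[idx] - cloud[idx].mean(axis=0)
            # Normal = direction of least variance in the neighbourhood.
            normal = np.linalg.svd(nbrs)[2][-1]
            angles.append(np.arccos(abs(normal[2])))   # inclination vs. z-axis
        hist, _ = np.histogram(angles, bins=bins, range=(0, np.pi / 2))
        return hist / hist.sum()                       # normalized descriptor

    print(lsn_descriptor(np.random.default_rng(1).normal(size=(100, 3))))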
The BRICS component model: a model-based development paradigm for complex robotics software systems (2013)
Because robotic systems are becoming ever more complex, developers around the world have, during the last decade, created component-based software frameworks (Orocos, Open-RTM, ROS, OPRoS, SmartSoft) to support the development and reuse of "large-grained" pieces of robotics software. This paper introduces the BRICS Component Model (BCM), which provides robotics developers with a set of guidelines, metamodels, and tools for structuring, as much as possible, the development of both individual components and component-based architectures, using one or more of the aforementioned software frameworks at the same time, without introducing any framework- or application-specific details. The BCM is built upon two complementary paradigms: the "5Cs" (separation of concerns between the development aspects of Computation, Communication, Coordination, Configuration, and Composition) and the metamodeling approach from Model-Driven Engineering.
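The sketch below is illustrative only (none of the names come from the BCM itself), but it shows the flavour of the 5Cs separation on a single component: computation as a pure function, configuration as external data, communication injected, and coordination as explicit lifecycle state.

    def edge_filter(image, threshold):           # Computation: pure algorithm
        return [[px > threshold for px in row] for row in image]

    CONFIG = {"threshold": 128}                  # Configuration: kept out of code

    class EdgeComponent:                         # Composition: wiring it together
        def __init__(self, publish):             # Communication: injected callback
            self.publish = publish
            self.state = "idle"                  # Coordination: lifecycle state

        def on_image(self, image):
            if self.state != "running":          # Coordination gates computation
                return
            self.publish(edge_filter(image, CONFIG["threshold"]))

    comp = EdgeComponent(publish=print)
    comp.state = "running"
    comp.on_image([[100, 200], [50, 255]])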
Deploying a complex robot software architecture on real robot systems and getting it to run reliably is a challenging task. We argue that software deployment decisions should be separated as much as possible from the core development of software functionalities. This will make the developed software more independent of a particular hardware architecture (and thus more reusable) and allow it to be deployed more flexibly on a wider variety of robot platforms. This paper presents a domain-specific language (DSL) which supports this idea and demonstrates how the DSL is used in a model-driven engineering-based development process. A practical example of applying the DSL to the development of an application for the KUKA youBot platform is given.
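The DSL's concrete syntax is not given here, so the sketch below uses a plain Python deployment model to convey the core idea: the mapping of components onto hosts lives outside the component code, so the same components can be redeployed without modification. Host and component names are hypothetical.

    # Hypothetical deployment model (the paper's DSL has its own concrete
    # syntax): the same functional components are mapped onto different
    # hosts without touching the component code itself.
    DEPLOYMENT_YOUBOT = {
        "platform": "KUKA youBot",
        "hosts": {
            "onboard": ["base_driver", "arm_driver", "navigation"],
            "laptop": ["object_recognition", "task_planner"],
        },
    }

    def deploy(model, launch):
        for host, components in model["hosts"].items():
            for component in components:
                launch(host, component)   # e.g. start a process via ssh/launch file

    deploy(DEPLOYMENT_YOUBOT, lambda host, comp: print(f"{host}: starting {comp}"))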