H-BRS Bibliography
This paper presents the b-it-bots RoboCup@Work team and its current hardware and functional architecture for the KUKA youBot robot. We describe the underlying software framework and the developed capabilities required for operating in industrial environments, including reliable and precise navigation, flexible manipulation, robust object recognition and task planning. New developments include an approach to grasping vertical objects, placement of objects that considers the empty space on a workstation, and the process of porting our code to ROS2.
In robot-assisted therapy for children with Autism Spectrum Disorder, the therapists' workload is increased by the need to control the robot manually. One solution to this problem is to increase the system's level of autonomy, so that the robot interprets and adapts to the behaviour of the child under therapy. The problem we are addressing is to develop a behaviour model for the robot's decision-making process that learns how to react adequately to particular child reactions. We propose using reinforcement learning for this task, where the feedback for learning is obtained from the therapist's evaluation of the robot's behaviour.
In robot-assisted therapy for individuals with Autism Spectrum Disorder, the workload of therapists during a therapeutic session is increased if they have to control the robot manually. To allow therapists to focus on the interaction with the person instead, the robot should be more autonomous, namely it should be able to interpret the person's state and continuously adapt its actions according to their behaviour. In this paper, we develop a personalised robot behaviour model that can be used in the robot decision-making process during an activity; this behaviour model is trained with the help of a user model that has been learned from real interaction data. We use Q-learning for this task; our results show that the learned policy requires about 10,000 iterations to converge. We therefore investigate policy transfer to improve the convergence speed and show that it is a feasible solution, although an inappropriate initial policy can lead to a suboptimal final return.
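The Q-learning and policy-transfer setup mentioned in the abstract above can be sketched on a toy problem. The chain MDP, reward structure, and hyperparameters below are illustrative assumptions for exposition only, not the paper's actual therapy model:

```python
import random

# Toy deterministic chain MDP (illustrative, not the paper's model):
# states 0..3, actions 0 (left) / 1 (right); reaching state 3 ends
# the episode with reward 1.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(state, action):
    nxt = min(max(state + (1 if action == 1 else -1), 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, q=None):
    """Tabular Q-learning with epsilon-greedy exploration."""
    # Passing a pre-trained table as `q` mimics policy transfer:
    # learning starts from a prior policy instead of from scratch.
    if q is None:
        q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    rng = random.Random(0)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Standard Q-learning update toward the bootstrapped target.
            target = reward if done else reward + gamma * max(q[nxt])
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

q = train()
# Greedy policy: in every non-terminal state, move right toward the goal.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

As in the abstract, a poorly chosen initial `q` table would bias early action selection; on this toy problem exploration eventually corrects it, but with limited exploration a bad prior can leave the final return suboptimal.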