
Can Reinforcement Learning for Continuous Control Generalize Across Physics Engines?

  • Reinforcement learning (RL) algorithms should learn as much as possible about the environment, but not the properties of the physics engine that generates it. Many algorithms solve tasks in physics-engine-based environments, yet no prior work has examined whether RL algorithms can generalize across physics engines. In this work, we compare the generalization performance of several deep reinforcement learning algorithms on a variety of continuous control tasks. Our results show that MuJoCo is the best engine from which to transfer learning to other engines. In contrast, none of the algorithms generalize when trained on PyBullet. We also find that several algorithms show promising generalizability if the effect of random seeds on their performance can be minimized.

Document Type: Preprint
Authors: Aaqib Parvez Mohammed, Matias Valdenegro-Toro
Number of pages: 10
ArXiv Id: http://arxiv.org/abs/2010.14444
Date of first publication: 2020/10/27
Departments, institutes and facilities: Fachbereich Informatik (Department of Computer Science)
Dewey Decimal Classification (DDC): 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Entry in this database: 2020/11/03