
Added opensim-rl environment, extended the dqn agent to support multi-dimensional action spaces, and added a sample configuration and options to configure an agent to learn in opensim-rl

Open · praveen-palanisamy opened this issue 7 years ago · 0 comments

opensim-rl is an environment introduced by the NIPS 2017 Learning to Run challenge. In this environment, an agent is tasked with learning to run while avoiding obstacles on the ground. The environment provides a detailed human musculoskeletal model and a physics-based simulation. It will remain useful for training agents on much more complex control tasks even after the NIPS challenge ends, and can serve as a good alternative or complement to MuJoCo-based environments.
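For reference, a minimal interaction loop with the challenge's osim-rl package might look like the sketch below. It follows the NIPS 2017 `RunEnv` interface (41-dimensional observation, 18 muscle activations in [0, 1]); exact signatures such as `reset(difficulty=...)` may differ across osim-rl versions, so treat this as an assumption rather than the repository's wrapper code.

```python
from osim.env import RunEnv

# NIPS 2017 Learning to Run environment: 41-dim observation,
# action is a vector of 18 muscle activations in [0, 1]
env = RunEnv(visualize=False)
observation = env.reset(difficulty=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()                 # random muscle activations
    observation, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", total_reward)
```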

Contributions:

  • [x] Added the opensim-rl environment as a stand-alone environment into the existing framework
  • [x] Updated README to point to the opensim-rl to set up the dependencies
  • [x] Modified the dqn agent to produce a list of actions instead of a single action index, so it can be used with action spaces that have more than one dimension (see the sketch after this list)
  • [x] Added sample factory configurations for the opensim-rl environment
  • [x] Added sample options for training an agent (dqn) in the opensim-rl environment
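
Since the stock dqn agent assumed a single discrete action index, one way to extend it to multi-dimensional actions is to discretize each action dimension and give the Q-network one head per dimension; greedy selection then takes an argmax per head and returns a list of per-dimension values. The sketch below illustrates that idea only; the class name `FactoredQNetwork`, the layer sizes, and the 41/18/11 dimensions are illustrative assumptions, not the repository's actual classes or configuration.

```python
import torch
import torch.nn as nn


class FactoredQNetwork(nn.Module):
    """Illustrative Q-network with one head per action dimension.

    Each head scores a fixed set of discretized bins for its dimension,
    so greedy action selection returns a list of per-dimension values
    rather than a single action index.
    """

    def __init__(self, obs_dim, action_dims, n_bins, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # one linear head per action dimension, each over n_bins discrete bins
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_bins) for _ in range(action_dims)]
        )
        # map bin indices back to continuous muscle activations in [0, 1]
        self.register_buffer("bins", torch.linspace(0.0, 1.0, n_bins))

    def forward(self, obs):
        h = self.body(obs)
        return [head(h) for head in self.heads]  # list of (batch, n_bins) tensors

    def greedy_action(self, obs):
        with torch.no_grad():
            q_per_dim = self.forward(obs)
        # argmax per dimension -> list of action values, one entry per dimension
        return [self.bins[q.argmax(dim=-1)] for q in q_per_dim]


# usage sketch: 41-dim observation, 18 muscles, 11 activation bins (illustrative numbers)
net = FactoredQNetwork(obs_dim=41, action_dims=18, n_bins=11)
action = net.greedy_action(torch.randn(1, 41))  # list of 18 per-dimension values
```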

praveen-palanisamy · Dec 02 '17 19:12