DDPG + HER to replace TRPO

Open hai-h-nguyen opened this issue 5 years ago • 7 comments

I want to replace TRPO with DDPG + HER and am having difficulties. The combination only works with a task that is registered with Gym. How did TRPO avoid that?

hai-h-nguyen avatar Feb 21 '19 18:02 hai-h-nguyen

I'm a little unclear about the question. Are you trying one of our examples? If not, is that a simulated task?

For all our real-world robot tasks, we do inherit gym.core.Env. For example, with the UR5 arm,

  • ReacherEnv inherits the gym core env (link)
  • The observation and action spaces are defined as gym Box objects (link)

As for registering the env, it's needed only when you'd like to use env = gym.make("custom_env_name"). We did that with our DoubleInvertedPendulumEnv. (link)
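In case it helps, here is a minimal sketch of that pattern: an env that inherits gym.core.Env, defines Box observation/action spaces, and registers itself so that gym.make works. The class name, space shapes, and id here are illustrative only, not our actual ReacherEnv.

```python
import numpy as np
import gym
from gym import spaces
from gym.envs.registration import register

class CustomReacherEnv(gym.core.Env):
    """Toy stand-in for a robot env: inherits gym.core.Env, Box spaces."""

    def __init__(self):
        # Observation: e.g. joint angles and velocities (shapes are made up).
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(6,), dtype=np.float32)
        # Action: e.g. joint velocity targets, scaled to [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0,
                                       shape=(3,), dtype=np.float32)

    def reset(self):
        # A real env would reset the robot and return its first observation.
        return np.zeros(6, dtype=np.float32)

    def step(self, action):
        # A real env would send `action` to the robot and read sensors.
        obs = np.zeros(6, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

# Registration is only needed if you want gym.make("CustomReacher-v0").
register(id="CustomReacher-v0", entry_point=CustomReacherEnv)
```

With that in place, `env = gym.make("CustomReacher-v0")` works the same way as for any built-in task.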

I'm assuming that you're trying to use the baselines implementation of DDPG. Let me know if you have any other questions.
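One more note in case you use baselines' HER: as far as I understand, it expects a goal-based env, i.e. a Dict observation space with observation/achieved_goal/desired_goal keys and a compute_reward method. A rough sketch with made-up names and shapes (not part of SenseAct):

```python
import numpy as np
import gym
from gym import spaces

class GoalReacherEnv(gym.core.Env):
    """Toy sketch of the goal-based interface HER-style code expects."""

    def __init__(self):
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Dict({
            "observation": spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32),
            "achieved_goal": spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32),
            "desired_goal": spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32),
        })
        self._goal = np.zeros(3, dtype=np.float32)

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Sparse reward: 0 within 5 cm of the goal, -1 otherwise (threshold
        # is arbitrary here).
        d = np.linalg.norm(achieved_goal - desired_goal, axis=-1)
        return -(d > 0.05).astype(np.float32)

    def _get_obs(self):
        pos = np.zeros(3, dtype=np.float32)  # stand-in for end-effector pose
        return {"observation": np.zeros(6, dtype=np.float32),
                "achieved_goal": pos,
                "desired_goal": self._goal.copy()}

    def reset(self):
        return self._get_obs()

    def step(self, action):
        obs = self._get_obs()
        reward = self.compute_reward(obs["achieved_goal"], self._goal, {})
        return obs, reward, False, {}
```

HER relabels goals in the replay buffer, so compute_reward must be callable on arbitrary (achieved, desired) pairs, not just the ones the env produced.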

gauthamvasan avatar Feb 21 '19 20:02 gauthamvasan

I have a different robot, but I modified the code so that it works. However, I want to try a different algorithm (DDPG + HER), as it should be faster than TRPO. HER uses gym's env-making function, so I think I can follow your suggestion.

Another question: my code has a problem after running for a number of hours. The _sensor_handler and actuator_handler threads stop running after a while (even though they were running fine for the first hour or so). What might be the possible reasons for that?

hai-h-nguyen avatar Feb 21 '19 20:02 hai-h-nguyen

This is a typical error:

WARNING:root:Agent has over-run its allocated dt, it has been 0.28047633171081543 since the last observation, 0.24047633171081542 more than allowed
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting

It just keeps looping between these. As the commands are not sent to the robot (the actuator_handler thread stops), the robot does not move at all. I also checked that the sensor_handler stops running.

hai-h-nguyen avatar Feb 21 '19 20:02 hai-h-nguyen

Is it possible for you to share some code snippets or elaborate on what you are trying to do? I have seen such errors when Python multiprocessing code was set up incorrectly.
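As a debugging aid, you could wrap each handler so that an exception is logged instead of killing the thread silently, and check is_alive() from the main loop. A sketch using only the standard library; the handler name below is just a stand-in, not SenseAct's actual API:

```python
import logging
import threading
import traceback

def run_logged(target, *args, **kwargs):
    """Run `target`, logging any exception that would otherwise vanish
    when the thread dies."""
    try:
        target(*args, **kwargs)
    except Exception:
        logging.error("Handler thread died:\n%s", traceback.format_exc())

def sensor_handler():
    # Stand-in for the real handler loop; here it fails immediately to
    # demonstrate the logging.
    raise RuntimeError("simulated communicator failure")

t = threading.Thread(target=run_logged, args=(sensor_handler,), daemon=True)
t.start()
t.join(timeout=1.0)
print("sensor_handler alive:", t.is_alive())  # False once it has crashed
```

If is_alive() turns False during a run, the logged traceback should point at what killed the communicator.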

gauthamvasan avatar Feb 22 '19 15:02 gauthamvasan

Thanks! Please look at the code at https://github.com/hhn1n15/SenseAct_Aubo. Basically, right now I am trying to replicate your results (using TRPO) with a new robot (an Aubo robot). I added a new device, aubo, and created an aubo_reacher (based on ur_reacher). Most of the code stays the same.

hai-h-nguyen avatar Feb 22 '19 17:02 hai-h-nguyen

The dt may overrun if expensive learning updates are done sequentially, among many other reasons. It is not that bothersome if it happens, say, once every few minutes. However, if it happens more often, two options are to compute the update more efficiently on a more powerful computer, or to run the learning updates asynchronously in a different process.
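The asynchronous option can be sketched with the standard multiprocessing module. This is an illustrative toy, not SenseAct's actual update code: the real-time loop only enqueues transitions (cheap), while a separate learner process consumes them and does the expensive work, so env.step timing never waits on an update.

```python
import multiprocessing as mp

def learner(queue, out):
    """Consume transitions and do (stand-in) expensive updates."""
    total = 0.0
    while True:
        item = queue.get()
        if item is None:          # sentinel: shut down
            break
        total += sum(item)        # placeholder for a gradient update
    out.put(total)

def run_demo(n_steps=5):
    q, out = mp.Queue(), mp.Queue()
    proc = mp.Process(target=learner, args=(q, out))
    proc.start()
    # Real-time loop: enqueue transitions without blocking on learning.
    for step in range(n_steps):
        q.put([float(step), 0.1])  # toy "transition"
    q.put(None)                    # tell the learner to finish
    result = out.get()             # drain before join to avoid deadlock
    proc.join()
    return result

if __name__ == "__main__":
    print("learner processed total:", run_demo())
```

The same shape works with a shared replay buffer and periodic weight syncs in place of the toy queue payloads.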

Are the handlers stopping even when you are running TRPO or PPO?

I suggest getting it to learn first with TRPO or PPO using the example script before moving to HER. Getting effective learning with a new robot is no trivial job, and I would be glad to see this working!

armahmood avatar Mar 02 '19 23:03 armahmood

I haven't tried DDPG+HER yet. The two handlers stop even with the original code using TRPO. Actually, the communicator stops, which makes the two threads stop.

hai-h-nguyen avatar Mar 02 '19 23:03 hai-h-nguyen