
[PPO2] problems resuming training

[Open] k0rean opened this issue 4 years ago · 5 comments

I'm trying to resume model training and I'm getting some strange results. I'm using SubprocVecEnv and VecNormalize on a custom environment:

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv, VecNormalize
from stable_baselines import PPO2
import os
...

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])

if os.path.exists("ppo/model.zip"): # resume training
    norm_env = VecNormalize.load("ppo/norm_env.p", env)
    model = PPO2.load("ppo/model.zip", norm_env, reset_num_timesteps=False, verbose=0, tensorboard_log="./ppo/logs")
else: # new model
    norm_env = VecNormalize(env, norm_reward=False)
    model = PPO2(MlpPolicy, norm_env, verbose=0, tensorboard_log="./ppo/logs")

model.learn(total_timesteps=2500000)
model.save("ppo/model.zip")
norm_env.save("ppo/norm_env.p")
env.close()

[Screenshot: TensorBoard episode-reward curves for the two runs, showing the discontinuity described below]

Firstly, I don't know why it doesn't continue the existing TensorBoard training curve even though I passed reset_num_timesteps=False. I already updated TensorBoard to the latest version and saw the same behaviour. But the bigger problem is the discontinuity between the two runs. I already tried a single run with more timesteps (10e6) and got a continuously improving curve, but it never reached a reward of 2.5 like the second run did here. The second run reached a higher reward almost at the beginning but then stopped improving. Am I making a mistake when loading the previous model?

System Info

  • Library installed using pip
  • Python version 3.6.9
  • Tensorflow version 1.14.0

k0rean avatar Apr 03 '20 14:04 k0rean

Related: https://github.com/hill-a/stable-baselines/issues/301 and https://github.com/hill-a/stable-baselines/issues/692

For continuing the tensorboard log, this is a known plotting bug (I need to find the issue again).
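
As a side note, in stable-baselines reset_num_timesteps is an argument of learn(), not of load(); extra keyword arguments to load() just overwrite model attributes, so it is silently ignored there. A minimal sketch of the resume call, reusing the paths from the snippet above:

norm_env = VecNormalize.load("ppo/norm_env.p", env)
model = PPO2.load("ppo/model.zip", env=norm_env, verbose=0, tensorboard_log="./ppo/logs")
# reset_num_timesteps=False keeps the internal step counter, so logging
# continues from the previous run instead of starting again at step 0
model.learn(total_timesteps=2500000, reset_num_timesteps=False)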

Also, you should use a Monitor wrapper to have access to the original reward, so that you can compare runs. The plotted reward is the normalized one; you cannot compare runs with it.
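
For illustration, a rough sketch of adding Monitor inside the env factory (make_custom_env and the log paths are placeholders for whatever init_env does):

import os
from stable_baselines.bench import Monitor
from stable_baselines.common.vec_env import SubprocVecEnv

def init_env(rank, log_dir="ppo/monitor"):
    def _init():
        env = make_custom_env()  # placeholder for the custom env construction
        os.makedirs(log_dir, exist_ok=True)
        # Monitor writes the unnormalized episode rewards/lengths to
        # <log_dir>/<rank>.monitor.csv, so runs can be compared directly
        return Monitor(env, os.path.join(log_dir, str(rank)))
    return _init

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])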

Did you try using the rl zoo?

araffin avatar Apr 03 '20 14:04 araffin

I looked at those issues but didn't find a solution. That's not critical anyway. I'm not normalizing rewards with VecNormalize, only the observations, so that's not the cause of the discontinuity. No, I didn't; why?

k0rean avatar Apr 03 '20 14:04 k0rean

@k0rean any solution to this problem?

njanirudh avatar Mar 12 '21 21:03 njanirudh

@njanirudh I do not have a direct answer, but if possible, try out stable-baselines3 and see if it helps with your issue. It is more actively maintained, so we can discuss and fix bugs there :)
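
For anyone following this advice, the equivalent resume logic in stable-baselines3 looks roughly like this (an untested sketch; init_env and n_envs as in the original snippet, and API details may vary between SB3 versions):

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv, VecNormalize

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])
# restore the normalization statistics together with the model
norm_env = VecNormalize.load("ppo/norm_env.p", env)
model = PPO.load("ppo/model.zip", env=norm_env, tensorboard_log="./ppo/logs")
# reset_num_timesteps=False continues the timestep counter and the tensorboard curve
model.learn(total_timesteps=2_500_000, reset_num_timesteps=False)
model.save("ppo/model.zip")
norm_env.save("ppo/norm_env.p")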

Miffyli avatar Mar 12 '21 21:03 Miffyli

https://github.com/hill-a/stable-baselines/issues/1192

rambo1111 avatar Feb 03 '24 19:02 rambo1111