Antonin RAFFIN
Probably a duplicate of https://github.com/DLR-RM/rl-baselines3-zoo/issues/249, help is welcome ;) but yes, for now, you need to copy the folder/files.
> I have tried to load the trained agent with these lines

You need to use the latest SB3 version, and I would also recommend using the RL Zoo for loading...
Hello, it is still planned, but it's not a priority. I would like to integrate LN and dropout (https://github.com/DLR-RM/stable-baselines3/issues/1069), and probably a progress bar callback first (https://github.com/DLR-RM/rl-baselines3-zoo/pull/287). Actually, I would welcome...
The callback is now available via the `rl_zoo3` package (`rl_zoo3.callbacks`); `VecNormalize` checkpoints are still missing, though.
> I've tried a bit in the past but that was unsuccessful because jit support was limited and I had issues with saving/loading

I gave it another try recently and...
@vwxyzjn ooh nice =) I didn't know you could do that with PyTorch (it looks like they included MPI but for GPU). I will try to have a look later...
> I'm unsure about the two SDE-related calls here and here; I just oriented myself on PPO, calling it before the policy is used for its forward pass (sadly haven't gotten...
> I absolutely understand why you'd want this; it was actually how I first programmed it as well. However, as PPG heavily reuses PPO, it just leads to a lot of...
> Yeah sure I can start working on a pr

+1. PR to @janEbert's fork or PR to SB3 contrib? (for the latter, make sure to read the contributing guide carefully...
@svolokh thanks for the info, would `validate_args=False` also solve the issue? (probably cleaner than deleting the cached probs)
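For context, `validate_args=False` is a standard `torch.distributions` option that skips input validation when constructing a distribution; a small sketch (the logits are made up, and whether this actually resolves the linked issue is the open question above):

```python
import torch
from torch.distributions import Categorical

# Illustrative logits, not taken from the issue.
logits = torch.tensor([0.1, 0.2, 0.7])

# validate_args=False disables argument validation for this instance,
# which may be cleaner than manually deleting cached probabilities.
dist = Categorical(logits=logits, validate_args=False)

action = dist.sample()
log_prob = dist.log_prob(action)
```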