pfrl
PFRL: a PyTorch-based deep reinforcement learning library
- [ ] Confirm DDPG results do not change
- [ ] Confirm TD3 results do not change
- [ ] Confirm SoftActorCritic results do not change

TODO: Memory output...
As per this discussion: https://github.com/pfnet/pfrl/pull/195#issuecomment-2082102663 I have replaced many of the `variable.detach().cpu().numpy()` calls with `variable.item()` in our DDPG/TD3/SAC family of agents. I have run the training scripts for these agents for...
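For context, `.item()` pulls a single scalar out of a 0-d tensor in one call, whereas `.detach().cpu().numpy()` builds an intermediate ndarray. A minimal sketch of the idiom, using NumPy (whose 0-d `.item()` behaves the same way as PyTorch's) to keep the example dependency-free:

```python
import numpy as np

# A 0-d array stands in for a scalar loss tensor; in PyTorch the
# old pattern would be loss.detach().cpu().numpy() and the new one
# is simply loss.item().
loss = np.array(0.25)

# .item() extracts a plain Python float directly, skipping the
# detach -> CPU transfer -> ndarray round trip.
value = loss.item()
print(value)  # 0.25
```

Note that `.item()` only works for single-element tensors; statistics that are genuinely arrays still need the ndarray path.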
This pull request is a work-in-progress branch for supporting gymnasium in lieu of gym. Todo/testing items:
- [ ] Check all example files run
- [ ] Add...
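The main API changes when moving from gym to gymnasium are that `reset()` returns `(obs, info)` and `step()` returns a 5-tuple with `done` split into `terminated` and `truncated`. A hypothetical compatibility shim (not part of PFRL; names are illustrative) sketching the translation:

```python
class GymToGymnasiumStep:
    """Hypothetical shim: adapt an old-style gym env (4-tuple step,
    bare-observation reset) to the gymnasium calling convention."""

    def __init__(self, env):
        self.env = env

    def reset(self, seed=None):
        obs = self.env.reset()
        return obs, {}  # gymnasium returns (obs, info)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # gymnasium splits `done` into terminated (true MDP end) and
        # truncated (time-limit cut-off, which old gym flagged in info)
        truncated = info.get("TimeLimit.truncated", False)
        terminated = done and not truncated
        return obs, reward, terminated, truncated, info


# Tiny fake old-style env to exercise the shim
class _FakeEnv:
    def reset(self):
        return 0

    def step(self, action):
        return 1, 1.0, True, {"TimeLimit.truncated": True}


env = GymToGymnasiumStep(_FakeEnv())
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(0)
print(terminated, truncated)  # False True
```

Agents and experiment loops then branch on `terminated` for bootstrapping and on `terminated or truncated` for episode resets.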
The tests use `scipy` and `optuna`, so I include them in the `test_requires`. I have linked in comments to the places where they are used in the tests.
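For reference, test-only dependencies in setuptools usually live in an extras group rather than in `install_requires`. A hedged sketch of what the `setup.py` change might look like (the runtime dependency list and version layout here are illustrative, not taken from the PR):

```python
# setup.py (sketch; package lists are illustrative assumptions)
from setuptools import setup, find_packages

setup(
    name="pfrl",
    packages=find_packages(),
    install_requires=["torch", "gym", "numpy"],  # illustrative only
    # scipy and optuna are needed only by the test suite,
    # so they are declared as a "test" extra:
    extras_require={"test": ["scipy", "optuna", "pytest"]},
)
```

Users then install them with `pip install pfrl[test]` before running the tests.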
I encountered the following error:

```
hogehoge\pfrl-master\examples\atari>python train_ppo_ale.py
Output files are saved in results\20230128T163113.362308
Traceback (most recent call last):
  File "train_ppo_ale.py", line 334, in <module>
    main()
  File "train_ppo_ale.py", line 208, in main...
```
`/usr/local/lib/python3.10/dist-packages/pfrl/replay_buffer.py:180: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered...
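That `UserWarning` is raised when a plain Python list of `numpy.ndarray`s is handed to a tensor constructor; collapsing the list into one contiguous ndarray first removes it. A NumPy-only sketch of the fix (the torch call is shown in a comment so the example stays dependency-free):

```python
import numpy as np

# A batch arriving as a list of per-transition arrays,
# as in a replay buffer's sampled minibatch
batch = [np.zeros(4, dtype=np.float32) for _ in range(32)]

# Slow path (triggers the warning):   torch.tensor(batch)
# Fast path: stack into one contiguous ndarray, then convert:
#   torch.as_tensor(np.stack(batch))
stacked = np.stack(batch)
print(stacked.shape)  # (32, 4)
```

The same pattern applies wherever `replay_buffer.py` assembles batches element-by-element before conversion.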
When I use the commands as shown in the folder "examples/atari/reproduction", an error occurred in all four examples, as below: the env made by gym.make()...
Can you add some example commands showing how to use the repo?
I am new to this field. When I was trying out pfrl on my own environment, the following prompt was returned. I will be glad to elaborate, but I have no...