rl-experiments
Keeping track of RL experiments
It seems some of the parameters are outdated now that the Ray library has been updated. Could you specify somewhere which Ray version these configs were tuned for?
Hi, why are `dueling` and `double_q` both set to false [here](https://github.com/ray-project/rl-experiments/blob/b848a10c08da302febd860270e90672addc42ef5/atari-dqn/dueling-ddqn.yaml#L11)? This seems to be a redundant/old file, since there is another file named `duel-ddqn.yaml` in the same folder...
Hi, can anyone explain the difference between `train_batch_size` and `sample_batch_size`, used [here](https://github.com/ray-project/rl-experiments/blob/b848a10c08da302febd860270e90672addc42ef5/atari-dqn/dist-dqn.yaml#L22)? I looked this key up in [Common Parameters](https://docs.ray.io/en/master/rllib-training.html#common-parameters) and in the DQN [trainer](https://github.com/ray-project/ray/blob/master/rllib/agents/dqn/dqn.py) but couldn't find an explanation...
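For context, in RLlib's DQN these two keys play different roles, if I read the docs correctly: `sample_batch_size` is the number of environment steps each rollout worker collects per rollout (before the samples are handed to the replay buffer), while `train_batch_size` is the number of transitions sampled from the replay buffer for each learner/SGD update. A minimal illustrative fragment (the values below are made up, not taken from this repo's configs):

```yaml
# Illustrative DQN config fragment; key semantics per RLlib's DQN docs,
# the numeric values are placeholders for the example.
config:
    sample_batch_size: 4    # env steps collected per worker rollout,
                            # then added to the replay buffer
    train_batch_size: 32    # transitions sampled from the replay buffer
                            # for each SGD/learner update
```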
This seems to be a copy of `basic-dqn.yaml`, with the real DDQN config living in `duel-ddqn.yaml`.
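If the file were really a dueling double-DQN config, one would expect both flags to be enabled. A hypothetical fragment in the same YAML style (only the two flags are the point; the experiment name and other keys are placeholders, not copied from the repo):

```yaml
# Hypothetical dueling double-DQN fragment; everything except the two
# flags is a placeholder.
dueling-ddqn:
    run: DQN
    config:
        dueling: true     # use the dueling value/advantage network head
        double_q: true    # use double Q-learning targets
```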
This repo is really useful, and I wonder whether there are any updates planned? The tuned hyper-parameters save us a lot of time. Thanks a lot!