Question about Atari env versions
Hi @takuseno, I had a question about the default versions used to create Atari envs through the method get_atari() in datasets.py (https://github.com/takuseno/d3rlpy/blob/1921f7163416fb698f0289b230fe8b88f47112d5/d3rlpy/datasets.py#L127).
This leads to envs being created with version v0 (because sticky_action=False), and I get the warning that v0 envs are deprecated and that I should use v4.
- Should I add sticky_action=True to the call, so that I get v4? Another method, get_atari_transitions, in the same file seems to call the v4 version instead; I'm not sure why. I imagine there are other bug fixes in the v4 envs that lead to more stable evaluation/rewards etc., which is why v0 got deprecated?
- I'm unsure if doing this will make the env different from the Atari offline dataset I train a CQL model on, since it's possible that a model could learn well on the data but fail at evaluation time due to the sticky_action setting in v4.
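For reference, here is a minimal sketch of the env-id mapping described above. The helper name is hypothetical and only mirrors the behavior reported for get_atari() (sticky_action=False yielding a v0 env id); the "NoFrameskip" suffix is an assumption about the Atari id format, not something taken from the d3rlpy source:

```python
# Hypothetical helper mirroring the mapping described in the issue:
# sticky_action=False -> a "-v0" env id, sticky_action=True -> "-v4".
def atari_env_id(game: str, sticky_action: bool) -> str:
    version = "v4" if sticky_action else "v0"
    return f"{game}NoFrameskip-{version}"

print(atari_env_id("Breakout", sticky_action=False))  # BreakoutNoFrameskip-v0
print(atari_env_id("Pong", sticky_action=True))       # PongNoFrameskip-v4
```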
Thanks! Gunshi
@gunshi Hi, this is a caveat of recent changes that atari_py and gym made around this. My recommendation is to downgrade your gym version to somewhere around 0.17.2 (and probably atari_py too?).
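If it helps, the suggested downgrade might look like the following (the maintainer did not name a specific atari_py version to pair with it, so that part is left open):

```shell
# Pin gym to the recommended older release.
pip install "gym==0.17.2"
# atari_py may need downgrading as well; pick a release contemporary
# with gym 0.17.2 (version left unspecified in the thread).
```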
Hi @takuseno, thanks so much for your prompt response! Could you elaborate, or point me to a link that explains the following (which is what I'm interested in): is there any issue with continuing to use the v0 version of Atari envs when, say, trying to reproduce the CQL results? Or are new offline RL algos (and most RL algos) now trained with v4 because of some bugs in v0?

I'm OK with the current setup; I just wanted to make sure I'm not degrading the learning performance of CQL (and my variant that I'm building on top of CQL) in a non-trivial way by using v0. I saw that the new version of gym wasn't supporting many Atari games, but I'm on version 0.19 and I can load the envs; it's just the env versions I'm concerned with. Best, Gunshi
I believe this is mostly about API changes. I'm not aware of any bugs in v0.