Optimize performance by changing to tilemaps
~~These optimizations depend on this PR: https://github.com/Baekalfen/PyBoy/pull/250~~
Edit: This PR requires PyBoy 1.6.0
Awesome! This breaks the current pretrained model though, right? I think this should be behind a flag, or maybe in a second environment file, so the functionality demoed in the video stays intact.
Sorry, yes, it does break the current model. Also, I have only tested that it is programmatically sound, not that the RL still works. It might require some additional tweaking.
I'd probably mark this as work-in-progress until you're ready (if you want) to retrain with this method. It's probably worth holding off on the retraining until we've found out whether there are other improvements to make.
Cool. If you want to put this change into a copy of the red_gym_env file (maybe something like red_gym_tilemap_env), I could merge this so it lives in the repo. Otherwise I'll just leave it open for now.
Let's leave it open for now. I might want to replace the knn system with a simpler coordinate check. I think that could provide additional performance increase.
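Roughly what I have in mind for the coordinate check (a hypothetical sketch only; the RAM addresses come from the pokered disassembly and the helper name is made up, not code from this repo):

```python
# Hypothetical sketch: reward novel (map, x, y) coordinates instead of
# maintaining a KNN index over frame embeddings.
seen_coords = set()

def coord_exploration_reward(pyboy):
    # Player position / map ID addresses per the pokered disassembly
    # (assumptions, not code from this repo):
    x = pyboy.get_memory_value(0xD362)
    y = pyboy.get_memory_value(0xD361)
    map_id = pyboy.get_memory_value(0xD35E)
    coord = (map_id, x, y)
    if coord in seen_coords:
        return 0.0
    seen_coords.add(coord)
    return 1.0  # novelty bonus for an unvisited tile
```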
EDIT: But yes, we can rename it to keep both versions if you'd like.
This will definitely need changes to the feature encoder: currently it's putting values in the range 0-383 into a uint8, so it's overflowing and some tile indices overlap. It's probably best to use a custom CNN where the input passes through an embedding layer first, mapping the integer indices to arbitrary learnable vectors.
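A minimal sketch of that idea, assuming stable-baselines3 and observations holding raw integer tile indices (the class and parameter names here are illustrative):

```python
import math
import torch.nn as nn
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class TileEmbeddingExtractor(BaseFeaturesExtractor):
    """Embed integer tile indices into learnable vectors before the MLP."""

    def __init__(self, observation_space, embed_dim=8, features_dim=256):
        super().__init__(observation_space, features_dim)
        # Tile ids 0-383 don't fit in a uint8, so embed the full range.
        self.embedding = nn.Embedding(384, embed_dim)
        n_cells = math.prod(observation_space.shape)
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_cells * embed_dim, features_dim),
            nn.ReLU(),
        )

    def forward(self, observations):
        # Cast back to integer indices; SB3 hands observations over as floats.
        return self.net(self.embedding(observations.long()))

# Passed to the model via policy_kwargs, e.g.:
# model = PPO("MlpPolicy", env,
#             policy_kwargs=dict(features_extractor_class=TileEmbeddingExtractor))
```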
Let me just say thanks; this is great work!
Training on an Intel 13th-gen / 7900 / 32 GB setup, I can report: system memory usage dropped 20-30%, GPU utilization is up from 2% to 7%, and I'm finally seeing stat results in the first iteration.
Just adding that the PokemonGen1 game wrapper has shipped in PyBoy 1.6.0.
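For anyone trying it, usage looks roughly like this (a hedged sketch: the ROM path is a placeholder, and `game_area()` follows PyBoy's generic game-wrapper API, so exact details may differ in 1.6.0):

```python
from pyboy import PyBoy

pyboy = PyBoy("PokemonRed.gb", window_type="headless")  # ROM path is a placeholder
pyboy.tick()

# The game wrapper exposes the screen as a grid of tile identifiers
# instead of raw pixels; the background tilemap is also reachable via
# pyboy.botsupport_manager().tilemap_background().
pokemon = pyboy.game_wrapper()
print(pokemon.game_area())
```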
Oh cool, I'm also cleaning up the env file and making improvements in parallel https://github.com/PufferAI/PufferLib/blob/0.5-cleanup/pufferlib/environments/pokemon_red.py
I'm going to port it to CleanRL, but I may be able to help rerun some experiments for free in the meantime.
@Baekalfen, @AlexGreason I've been experimenting with this tile implementation to train a different game and getting much better results using 'MlpPolicy' over 'CnnPolicy'. My understanding is that 'CnnPolicy' is better for raw image processing (pixel-level), while 'MlpPolicy' is better when using tiles. This may also allow reducing the output_shape below 36x36, which seems to be a lower bound for 'CnnPolicy', but I haven't tested this yet.
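For reference, here's a self-contained sketch of the comparison setup (the dummy env stands in for the tilemap env from this PR, and it assumes stable-baselines3 >= 2.0 with gymnasium):

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class DummyTileEnv(gym.Env):
    """Stand-in for the tilemap env: observations are an 18x20x3 tile grid."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(18, 20, 3), dtype=np.uint8)
        self.action_space = gym.spaces.Discrete(7)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        return self.observation_space.sample(), 0.0, False, False, {}

# MlpPolicy just flattens the observation, so it accepts the native
# 18x20 grid; CnnPolicy's default NatureCNN rejects images below 36x36.
model = PPO("MlpPolicy", DummyTileEnv(), verbose=0)
model.learn(total_timesteps=2048)
```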
Very interesting. How's performance? I'd assume reducing the input size to 18, 20 (the native tilemap size) would help too.
Very good! I got roughly a 20% FPS increase during training with MlpPolicy, output_shape = (18, 20, 3), and vec_dim = 1080 (to fix an update_frame_knn_index runtime error). I'm running on a 12-core 3900X with 32 GB RAM.
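(For reference, that vec_dim is just the flattened size of the new observation shape:)

```python
import math

output_shape = (18, 20, 3)         # native tilemap size
vec_dim = math.prod(output_shape)  # must match what update_frame_knn_index expects
assert vec_dim == 1080
```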