stable-baselines3
Fix memory leak in base_class.py
Description
Loading the data return value is unnecessary since it is unused, and loading it causes a memory leak through the ep_info_buffer variable. I found this while loading a PPO learner from storage on a multi-GPU system: the ep_info_buffer is restored onto the device it was on when the model was saved, instead of the target loading device, and is then never cleaned up.
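Roughly, the idea is just to skip deserializing the unused data when loading parameters. A minimal sketch, assuming the `load_from_zip_file` helper from `stable_baselines3.common.save_util` and a hypothetical `load_params_only` stand-in for the call inside `set_parameters`:

```python
from stable_baselines3.common.save_util import load_from_zip_file

def load_params_only(load_path, device="auto"):
    # Hypothetical helper mirroring what set_parameters needs: with load_data=False
    # the unused `data` dict (which contains ep_info_buffer) is never deserialized,
    # so saved tensors are not restored onto the device they lived on at save time.
    _, params, _ = load_from_zip_file(load_path, device=device, load_data=False)
    return params
```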
Motivation and Context
- [ ] I have raised an issue to propose this change (required for new features and bug fixes)
Types of changes
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation (update in the documentation)
Checklist
- [x] I've read the CONTRIBUTION guide (required)
- [x] I have updated the changelog accordingly (required).
- [ ] My change requires a change to the documentation.
- [ ] I have updated the tests accordingly (required for a bug fix or a new feature).
- [ ] I have updated the documentation accordingly.
- [ ] I have opened an associated PR on the SB3-Contrib repository (if necessary)
- [ ] I have opened an associated PR on the RL-Zoo3 repository (if necessary)
- [x] I have reformatted the code using `make format` (required)
- [x] I have checked the codestyle using `make check-codestyle` and `make lint` (required)
- [x] I have ensured `make pytest` and `make type` both pass. (required)
- [x] I have checked that the documentation builds using `make doc` (required)
Note: You can run most of the checks using `make commit-checks`.
Note: we are using a maximum length of 127 characters per line
> Loading the data causes a memory leak through the ep_info_buffer variable.

Do you have a minimal example to reproduce/track this behavior? Also, how big is the leak?

> Do you have a minimal example to reproduce/track this behavior? Also, how big is the leak?

Hi, I managed to get this working example:
```python
import time

import torch
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Parallel environments
train = True
vec_env = make_vec_env("CartPole-v1", n_envs=2)

if train:
    model = PPO("MlpPolicy", vec_env, verbose=1, device="cuda:2")
    model.learn(total_timesteps=1)
    # Misuse the episode info buffer by storing a tensor that lives on cuda:2
    model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")])
    model.save("ppo_cartpole")
    del model

model = PPO("MlpPolicy", vec_env, verbose=1, device="cuda:3")
model.set_parameters("ppo_cartpole.zip", device="cuda:3")
torch.cuda.empty_cache()

print("watch gpustat now")
time.sleep(10)
print("memory on ep info device:", torch.cuda.memory_allocated("cuda:2"))  # prints 0
```
Adjust the CUDA devices according to your machine. Run once with train=True, then set it to False to make sure the GPU usage comes from loading rather than from training.
Running gpustat shows that cuda:2 is being used with circa 154 MB, same for cuda:3. However, `torch.cuda.memory_allocated("cuda:2")` prints 0. So maybe cuda:2 is being initialized for torch usage even though it does not have to be?
Anyway, setting `load_data=False` should be safe since the data return value is not used, and it fixes the problem: with this option, gpustat shows that only cuda:3 is used, as intended.
Here is the output of `watch gpustat` without the change:

```
Every 2.0s: gpustat myserver: Tue May 7 11:13:36 2024
myserver Tue May 7 11:13:36 2024 550.78
[0] NVIDIA GeForce RTX 2080 Ti | 36'C, 0 % | 262 / 11264 MB |
[1] NVIDIA GeForce RTX 2080 Ti | 35'C, 0 % | 262 / 11264 MB |
[2] NVIDIA GeForce RTX 2080 Ti | 31'C, 0 % | 416 / 11264 MB | ole(154M)
[3] NVIDIA GeForce RTX 2080 Ti | 34'C, 0 % | 418 / 11264 MB | ole(156M)
[4] NVIDIA GeForce RTX 2080 Ti | 33'C, 0 % | 262 / 11264 MB |
[5] NVIDIA GeForce RTX 2080 Ti | 33'C, 0 % | 262 / 11264 MB |
[6] NVIDIA GeForce RTX 2080 Ti | 33'C, 0 % | 262 / 11264 MB |
[7] NVIDIA GeForce RTX 2080 Ti | 33'C, 0 % | 262 / 11264 MB |
```
Is there anything I should still do for the merge?

> Is there anything I should still do for the merge?

From your side, nothing for now ;)
What is missing is on my side: I need to take some time to reproduce the problem and check that nothing else would be impacted.
I took the time to look at your example closely, but I don't understand `model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")])`; this buffer is not supposed to contain any torch variable.
Btw, PPO is usually faster when everything is on CPU (when not using a CNN).
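For illustration, a minimal sketch of that tip using the same CartPole setup as your example:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# With an MlpPolicy (no CNN), PPO is typically faster with everything on CPU,
# since per-step CPU-GPU transfers outweigh any speedup for small networks.
vec_env = make_vec_env("CartPole-v1", n_envs=2)
model = PPO("MlpPolicy", vec_env, verbose=1, device="cpu")
model.learn(total_timesteps=1_000)
```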
> Running gpustat shows that cuda:2 is being used with circa 154 MB, same for cuda:3. However, `torch.cuda.memory_allocated("cuda:2")` prints 0. So maybe cuda:2 is being initialized for torch usage even though it does not have to be?

I'm also not sure I understand this.
What is the expected result? Memory allocated on `gpu:3` but no memory allocated on `gpu:2`?
> I took the time to look at your example closely, but I don't understand `model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")])`; this buffer is not supposed to contain any torch variable.

Oh... I am using an internal framework built on Stable Baselines. I do not entirely understand what it is doing, but apparently we have a use case for putting torch variables in this buffer. I did the same in the example to reproduce the issue.
> Btw, PPO is usually faster when everything is on CPU (when not using a CNN).

Thanks for the tip!
> Running gpustat shows that cuda:2 is being used with circa 154 MB, same for cuda:3. However, `torch.cuda.memory_allocated("cuda:2")` prints 0. So maybe cuda:2 is being initialized for torch usage even though it does not have to be?
>
> I'm also not sure I understand this. What is the expected result? Memory allocated on `gpu:3` but no memory allocated on `gpu:2`?

Exactly! I do not want to touch the GPU that was used for training when doing inference.
From my understanding, there are two issues here:
1. We should not put torch variables in the episode info buffer in our internal framework.
2. The episode info buffer is loaded from storage but then never used. Normally this is not a problem, but if you happen to misuse the info buffer with torch variables, a GPU gets initialized unexpectedly, allocating about 150 MB of memory.
While it is arguable that 2 is only a problem because of 1, loading the data is superfluous since the data return value is unused. I do not see how it would hurt to merge this PR.
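For completeness, a small sketch (assuming the same CartPole setup as above) of what ep_info_buffer is meant to contain: plain episode-info dicts written by the Monitor wrapper, not tensors:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# make_vec_env wraps each env in Monitor, which emits episode info dicts like
# {"r": episode_reward, "l": episode_length, "t": elapsed_time}; these are what
# ep_info_buffer is meant to accumulate.
vec_env = make_vec_env("CartPole-v1", n_envs=2)
model = PPO("MlpPolicy", vec_env, verbose=0, device="cpu")
model.learn(total_timesteps=2_048)
print(list(model.ep_info_buffer)[:3])
```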