torchdiffeq
Bug: Memory leak with from torchdiffeq import odeint
When using autograd, I found what looks like a memory leak. Even with
gc.collect()
torch.cuda.empty_cache()
the allocated memory keeps growing with each iteration, as reported by
logging.info("memory_allocated(MB) {}".format(torch.cuda.memory_allocated()/1048576))
Have you seen a similar issue before? Could you suggest a solution?
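For reference, here is a minimal sketch of the kind of training loop where I see this. The ODE function, optimizer, and tensor sizes below are placeholder assumptions, not my actual model, and it assumes a CUDA device as in my setup:

import gc
import logging

import torch
import torch.nn as nn
from torchdiffeq import odeint  # plain (non-adjoint) solver

logging.basicConfig(level=logging.INFO)
device = torch.device("cuda")  # assumes a GPU, as in my setup

class ODEFunc(nn.Module):
    # Toy dynamics standing in for the real model.
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc().to(device)
optimizer = torch.optim.Adam(func.parameters(), lr=1e-3)
y0 = torch.randn(16, 8, device=device)           # batch of initial states
t = torch.linspace(0.0, 1.0, 10, device=device)  # integration time points

for step in range(100):
    optimizer.zero_grad()
    pred = odeint(func, y0, t)   # backprop goes through the solver steps
    loss = pred.pow(2).mean()
    loss.backward()
    optimizer.step()

    gc.collect()
    torch.cuda.empty_cache()
    logging.info("step %d memory_allocated(MB) %.2f",
                 step, torch.cuda.memory_allocated() / 1048576)

One thing I know can produce this symptom in plain PyTorch is holding a reference to loss or pred across iterations (e.g., appending the tensor itself to a list instead of loss.item()), since that keeps the whole autograd graph alive, but I don't think I am doing that here.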
Do you have a minimal working example, and your PyTorch version?