CloserLookFewShot
GPU memory continues to grow for MAML
Hi,
Thank you for sharing the code. When running MAML with the Conv4 backbone, memory usage accumulates as epochs increase, eventually causing a CUDA out-of-memory error. The problem seems to be caused by `grad = torch.autograd.grad(set_loss, fast_parameters, create_graph=True)`. When I set `create_graph=False` (approximating first-order MAML), memory usage becomes normal. This indicates that the graph created with `create_graph=True` is not released after each epoch.
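For reference, here is a minimal sketch of the kind of inner-loop step in question. This is a toy simplification, not the repo's actual code: `model`, `support_x`, `support_y`, and `inner_lr` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Toy setup standing in for a Conv4 backbone and a support set.
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()
support_x = torch.randn(8, 4)
support_y = torch.randint(0, 2, (8,))
inner_lr = 0.01

fast_parameters = list(model.parameters())
set_loss = loss_fn(model(support_x), support_y)

# create_graph=True retains the graph of the gradient computation so the
# outer loop can backpropagate through the inner update (second-order MAML).
# Each retained graph holds activations, which is where memory accumulates
# if graphs are not freed between iterations.
grad = torch.autograd.grad(set_loss, fast_parameters, create_graph=True)

# First-order approximation: create_graph=False frees the graph right away,
# keeping memory flat at the cost of dropping second-order terms.
# grad = torch.autograd.grad(set_loss, fast_parameters, create_graph=False)

# Inner update: build new "fast weights" without mutating the originals.
fast_parameters = [p - inner_lr * g for p, g in zip(fast_parameters, grad)]
```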
Did you encounter this problem when training MAML? Could you suggest how to solve it?
Thanks!
The problem has been resolved by downgrading the torch version. Please forget about it. :)