warpgrad
Out of GPU Memory
Hi! I ran into a problem. When I run the command "python main.py --meta_model warp_leap --suffix myrun18" with my own dataset and a ResNet-50 model, GPU memory keeps growing until the limit is exceeded and the program crashes. I found that if I comment out

self._state_buffer[slot].append(clone_state(state, device='cpu'))

in warpgrad/warpgrad.py, the problem no longer appears. But isn't this line supposed to increase CPU memory while leaving GPU memory unchanged? Why does GPU memory keep growing? It is so weird.
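One plausible cause (an assumption on my part, since I have not checked what clone_state actually does internally): if the cloned tensors are moved to the CPU without being detached from the autograd graph, each stored clone still holds a reference to the GPU computation graph that produced it, so the GPU memory backing that graph can never be freed. A minimal sketch illustrating the difference:

```python
import torch

def clone_state_detached(state, device='cpu'):
    # Hypothetical variant of clone_state: detach each tensor before
    # moving it, so the stored copy no longer references the autograd
    # graph (and, on a GPU run, the GPU buffers behind it).
    return {k: v.detach().to(device) for k, v in state.items()}

# Demonstration: .cpu() alone preserves grad_fn, keeping the graph alive.
w = torch.randn(3, requires_grad=True)
y = (w * 2).sum()            # y carries a grad_fn referencing w

moved = y.cpu()              # still has a grad_fn -> graph stays alive
detached = y.detach().cpu()  # grad_fn dropped -> graph can be freed

print(moved.grad_fn is not None)   # True: graph still referenced
print(detached.grad_fn is None)    # True: reference broken
```

If this is what is happening, each append would pin one iteration's worth of GPU activations, which matches the steady growth you describe; but again, this is only a guess about clone_state's behavior.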