stylegan2-pytorch
PL reg memory consumption
Hello. I have the following setting: batch size 12, gradient accumulation 4, image size 128. At iteration 5024 the PL regularization kicks in and consumes A LOT of memory: 7 GB of GPU memory before, 11 GB after. Is it necessary to keep the graph for the PL term? Is your implementation memory-efficient?
Same here. After 20,000 iterations there is such a sudden spike in memory consumption that the run crashes.
I think the problem is in the following line: `pl_noise = torch.randn(images.shape, device=device) / math.sqrt(num_pixels)`. The noise tensor has the same shape as the generated images, so it scales with the batch size and resolution, and it occupies extra GPU memory while the graph is retained.
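For context, here is a sketch of the surrounding path-length computation, reconstructed around the line quoted above (exact names in the repo may differ). The `create_graph=True` in the gradient call is what keeps the generator graph alive so the penalty can itself be backpropagated, and that second-order graph plus the full-resolution noise buffer is where the jump in memory comes from:

```python
import math
import torch
from torch.autograd import grad as torch_grad

def calc_pl_lengths(styles, images):
    device = images.device
    num_pixels = images.shape[2] * images.shape[3]
    # Full-resolution noise buffer: grows with batch size and image size.
    pl_noise = torch.randn(images.shape, device=device) / math.sqrt(num_pixels)
    outputs = (images * pl_noise).sum()
    # create_graph=True builds a graph for the gradient itself so the PL
    # penalty can be differentiated -- this is the main memory cost.
    pl_grads = torch_grad(outputs=outputs, inputs=styles,
                          create_graph=True, retain_graph=True,
                          only_inputs=True)[0]
    # Per-sample path length: norm of the Jacobian-noise product.
    return (pl_grads ** 2).sum(dim=2).mean(dim=1).sqrt()
```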
I also have this exact problem; everything shuts down at the 5th epoch. Is there any way to fix it?
I am having the same problem. When I try to generate images at a resolution higher than 256, the same error comes up over and over. Is it possible to allocate `pl_noise` somewhere else and still have it interact with the rest of the code?
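One possible workaround, sketched here as an assumption rather than something the library exposes: compute the path-length penalty on a separate, smaller forward pass (similar in spirit to the official StyleGAN2's `pl_batch_shrink` with lazy regularization), so the graph retained by `create_graph=True` only covers a fraction of the batch. The names `sample_styles`, `generator`, `pl_mean`, and `pl_weight` below are placeholders for whatever your training loop actually uses:

```python
# Hypothetical mitigation -- not the library's built-in code.
pl_batch = max(1, batch_size // 4)           # shrink the batch for the PL pass
pl_styles = sample_styles(pl_batch)          # placeholder: sample w-space codes
pl_images = generator(pl_styles)             # smaller forward -> smaller retained graph
pl_lengths = calc_pl_lengths(pl_styles, pl_images)
pl_loss = ((pl_lengths - pl_mean) ** 2).mean()
(pl_loss * pl_weight).backward()
```

Running this only every N steps (lazy regularization) instead of on every batch would reduce the average memory and compute overhead further, at the cost of a noisier path-length estimate.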