glow-pytorch
[encoder part problem] generate_z()
In the inference phase, we should use the selected image to generate its latent value z,
as shown in 'z_base = graph.generate_z(dataset[base_index]["x"])'.
But in glow/models.py, the result is repeated to shape (B, 1, 1, 1).
I do not know why, and this operation needs too much GPU memory; I always get out-of-memory errors.
Could you help me?
Did the out-of-memory error happen in the inference phase? What batch size are you using, and how much GPU memory does your device have?
Yes:
In the training phase, I set batch_size=48, so each device is assigned 12 samples, which takes about 4 GB on my TITAN V (12 GB).
In the inference phase, the generate_z operator repeats z to (B, 1, 1, 1) where B = Train.batch_size (48), which would need 4 GB x 4 > 12 GB. So...
You can use a smaller batch size in the inference phase; it does not need to match the training phase.
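To illustrate the point above, here is a minimal sketch (not the repo's actual code) of the memory cost of repeating a single latent z of shape (1, C, H, W) to a batch of B copies, as `.repeat(B, 1, 1, 1)` does. The shapes are assumptions chosen for illustration; the takeaway is that the repeated tensor's memory scales linearly with B, so shrinking the inference batch size shrinks the allocation proportionally.

```python
# Hypothetical sketch: estimate the bytes needed for a float32 tensor
# of shape (B, C, H, W), i.e. a latent z repeated B times along the
# batch dimension. Shapes here are illustrative, not from the repo.
def repeated_z_bytes(batch_size, channels, height, width, dtype_bytes=4):
    """Memory footprint of a (B, C, H, W) float32 tensor in bytes."""
    return batch_size * channels * height * width * dtype_bytes

# Repeating to the training batch size (B=48) vs a small inference
# batch (B=4): the cost scales linearly with B.
train_cost = repeated_z_bytes(48, 48, 64, 64)
infer_cost = repeated_z_bytes(4, 48, 64, 64)
print(train_cost // infer_cost)  # → 12, i.e. 12x less memory with B=4
```

The same reasoning applies regardless of the actual latent shape: only the leading batch dimension differs between the two phases, so the ratio of memory use is exactly B_train / B_infer.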