glow-pytorch
z_list
Hi, thanks for your nice work! I am new to the Glow model, so I have some basic questions that I couldn't resolve even after searching online.
The flow model translates the input $x$ into a latent code $z$ through a sequence of invertible transformations. In my understanding, we only need the final output $z$ to reconstruct the input $x$. Why don't we use the learned $z$ instead of a random z_list?
I appreciate your answer and hope you have a good day!
If you need to reconstruct, then you only need the output z list. Random zs are used for sampling from the model.
Thanks for your answer! Why do we need a z list rather than a single z?
At each block, half of the outputs is split off as a z and appended to zs.
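A minimal sketch of that multi-scale split (the names `multiscale_forward`, `blocks`, and `zs` are illustrative, not the repo's exact code):

```python
import torch

def multiscale_forward(x, blocks):
    # At every block except the last, half the channels are factored
    # out as a latent z; the other half continues through the flow.
    zs = []
    out = x
    for i, block in enumerate(blocks):
        out = block(out)  # hypothetical flow block
        if i < len(blocks) - 1:
            out, z = out.chunk(2, dim=1)  # split channels in half
            zs.append(z)
    zs.append(out)  # the final block's output is the last z
    return zs
```

This is why reconstruction needs the whole list: each z holds channels that left the flow early, so no single tensor contains all the information about $x$.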
Thanks a lot! I think I have more understanding of the glow model.
Hi, It is me again!
I can't understand `self.scale = nn.Parameter(torch.zeros(1, out_channel, 1, 1))`
and `out = out * torch.exp(self.scale * 3)` in the ZeroConv2d module. Could you please tell me what role this plays? I appreciate your answer!
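For context, the module in question looks roughly like this (a sketch based on the quoted lines; the padding details are an assumption about the repo's implementation):

```python
import torch
from torch import nn
from torch.nn import functional as F

class ZeroConv2d(nn.Module):
    # Both the conv weights and the per-channel scale start at zero,
    # so this layer outputs zeros at initialization; the Glow paper
    # recommends this so each coupling layer starts as an identity.
    def __init__(self, in_channel, out_channel):
        super().__init__()
        self.conv = nn.Conv2d(in_channel, out_channel, 3, padding=0)
        self.conv.weight.data.zero_()
        self.conv.bias.data.zero_()
        self.scale = nn.Parameter(torch.zeros(1, out_channel, 1, 1))

    def forward(self, input):
        out = F.pad(input, [1, 1, 1, 1], value=1)
        out = self.conv(out)
        # exp(scale * 3) == 1 at init; during training it lets each
        # output channel learn its own magnitude multiplicatively.
        out = out * torch.exp(self.scale * 3)
        return out
```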
```python
if args.n_bits < 8:
    image = torch.floor(image / 2 ** (8 - args.n_bits))
image = image / n_bins - 0.5
```
Hi, one more question. Does this code mean `image = image / 256 - 0.5` (when n_bits is 8)? What is the benefit of doing it this way?
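A standalone sketch of that preprocessing step (assuming `n_bins = 2 ** n_bits` and pixel values in [0, 255], as is conventional):

```python
import torch

def preprocess(image, n_bits=8):
    # image: float tensor holding integer pixel values in [0, 255]
    n_bins = 2.0 ** n_bits
    if n_bits < 8:
        # drop the least-significant bits to quantize more coarsely
        image = torch.floor(image / 2 ** (8 - n_bits))
    # rescale to [-0.5, 0.5); with n_bits=8 this is image/256 - 0.5
    return image / n_bins - 0.5
```

So yes, for 8-bit images it reduces to `image / 256 - 0.5`, which centers the data around zero.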
`loss = -log(n_bins) * n_pixel`
Hi~ Why do we need this constant term in the loss? I appreciate your answer! Thanks a lot!
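For reference, one common reading of this constant (an assumption consistent with how the RealNVP/Glow papers compute bits per dimension, not necessarily this repo's exact code): rescaling each pixel by `1/n_bins` during preprocessing shrinks the data volume, and `-log(n_bins) * n_pixel` is the change-of-variables correction that makes the objective the likelihood of the original discrete pixels. A sketch of the full objective, where `calc_bits_per_dim`, `log_p`, and `logdet` are illustrative names:

```python
import math

def calc_bits_per_dim(log_p, logdet, n_pixel, n_bins):
    # constant correction for scaling every pixel by 1/n_bins
    loss = -math.log(n_bins) * n_pixel
    # add the model's log-density and the flow's log-determinant
    loss = loss + logdet + log_p
    # convert total negative log-likelihood in nats to bits per dim
    return -loss / (math.log(2) * n_pixel)
```

With `log_p = logdet = 0` this constant alone yields exactly `n_bits` bits per dimension, which is the correct baseline for `n_bits`-bit data.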
Hi, did you train the face model at image size 256×256? What hyperparameter setup did you use? I set batch=2 and kept everything else the same, and the results are quite bad.
Parallel training doesn't work either. And if I change n_block from 4 to 6 as suggested by the paper, the loss becomes NaN.