glow-pytorch
PyTorch implementation of Glow
Hello! Great work on making the code follow the paper so closely; it makes the paper very lucid. I have a couple of questions on the AffineCoupling class under the...
I think there might be a small mistake in the dequantization step: https://github.com/rosinality/glow-pytorch/blob/master/train.py#L99 should probably be `n_bins = 2. ** args.n_bits - 1.` rather than...
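For context, here is a minimal sketch of the kind of bit-depth reduction and uniform dequantization this issue is about; the function name `preprocess` and its exact form are illustrative, not the repository's code, but they show where `n_bins` enters and why off-by-one matters for the pixel scaling.

```python
import torch

def preprocess(image, n_bits=5):
    # Sketch of the quantization / dequantization step discussed in the issue.
    # `image` is assumed to be a float tensor in [0, 1].
    n_bins = 2.0 ** n_bits  # the issue suggests 2.0 ** n_bits - 1.0 here instead
    image = torch.floor(image * 255 / 2 ** (8 - n_bits))  # reduce to n_bits levels
    image = image / n_bins - 0.5                          # rescale to roughly [-0.5, 0.5]
    image = image + torch.rand_like(image) / n_bins       # add uniform dequantization noise
    return image
```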
n_bits not only adds some noise but also strongly influences training; what does this mean?
Hi, thank you for your great implementation. Regarding the "learned" prior, I wanted to ask: 1. Why are you considering the prior to be a Gaussian with **trainable** parameters rather...
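As background to this question, a minimal sketch of what a "learned" Gaussian prior can look like: the mean and log-std come from a zero-initialized convolution, so training starts from a standard Normal but the prior can adapt. Class and argument names below are hypothetical, not the repository's.

```python
import math
import torch
from torch import nn

class LearnedPrior(nn.Module):
    # Illustrative learned prior: a zero-initialized conv outputs (mean, log_sd),
    # so at initialization the prior is exactly N(0, 1) but its parameters are trainable.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 2 * channels, 3, padding=1)
        nn.init.zeros_(self.conv.weight)  # zero init => mean = 0, log_sd = 0 at the start
        nn.init.zeros_(self.conv.bias)

    def forward(self, h):
        mean, log_sd = self.conv(h).chunk(2, dim=1)
        return mean, log_sd

def gaussian_log_p(x, mean, log_sd):
    # Elementwise log-density of N(mean, exp(log_sd)^2)
    return -0.5 * math.log(2 * math.pi) - log_sd - 0.5 * (x - mean) ** 2 / torch.exp(2 * log_sd)
```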
I am training on CelebA 64x64 with 5 bits (4 GPUs); after about 2 hours the loss is as low as 1.1, but the sampled images still have low visual quality....
When I try to train on one-channel images from a custom 32x32 dataset, I get dimension mismatches during the initialization in the forward function. I get the following...
Thank you for the repo! Upon reloading the Glow model from a checkpoint after training, cross-entropy performance degraded. This was likely because the ActNorm module was being re-initialized on the...
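One common way to avoid this, sketched below under the assumption that ActNorm uses data-dependent initialization on the first batch, is to keep the "already initialized" flag as a buffer so it is stored in the state_dict and survives a reload. This is an illustration of the fix, not the repository's exact class.

```python
import torch
from torch import nn

class ActNorm(nn.Module):
    # Sketch: data-dependent init runs once; the flag is a buffer, so loading a
    # checkpoint restores it and the saved scale/bias are not overwritten.
    def __init__(self, num_channels):
        super().__init__()
        self.loc = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.scale = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.register_buffer("initialized", torch.tensor(0, dtype=torch.uint8))

    def initialize(self, x):
        with torch.no_grad():
            mean = x.mean(dim=[0, 2, 3], keepdim=True)
            std = x.std(dim=[0, 2, 3], keepdim=True)
            self.loc.data.copy_(-mean)
            self.scale.data.copy_(1 / (std + 1e-6))

    def forward(self, x):
        if self.initialized.item() == 0:
            self.initialize(x)
            self.initialized.fill_(1)
        _, _, h, w = x.shape
        logdet = h * w * torch.log(torch.abs(self.scale)).sum()
        return self.scale * (x + self.loc), logdet
```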
Wow, those generated samples look very good! Do you have any plans to release the model checkpoints (on Google Drive / Dropbox)?
Hi, I tried to run your code on MNIST, but I get a negative loss value. How can I compute the NLL per dimension? Sorry,...
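For reference, a short sketch of the standard conversion from the model's log-likelihood (in nats, summed over all dimensions) to bits per dimension; the discretization term log(n_bins) per pixel is what keeps the reported value positive even when the raw continuous-density loss goes negative on MNIST. The function name and arguments are illustrative.

```python
import math

def bits_per_dim(log_p, logdet, image_size, n_channels=1, n_bits=8):
    # log_p + logdet: total log-likelihood in nats for one image
    n_pixel = image_size * image_size * n_channels
    n_bins = 2.0 ** n_bits
    nll = -(log_p + logdet)                 # negative log-likelihood in nats
    nll = nll + math.log(n_bins) * n_pixel  # correction for quantizing to n_bins levels
    return nll / (n_pixel * math.log(2))    # nats -> bits, averaged per dimension
```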
Hi! I tried to run your code. The network starts training well and the loss decreases, but after a few iterations the loss just starts to increase. I tried...