
Flow not perfectly invertible

sidney1505 opened this issue 2 years ago · 4 comments

Hi, given an input image tensor x and the Glow model, I tried the following:

latent = glow(x)[2]
x_reconstructed = glow.reverse(latent)

Since it is a normalizing flow, one would expect x_reconstructed to be very similar to x, since the only source of error should be rounding. However, I observe very large differences. Does anybody have an explanation for this?
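For context, the expectation above is correct for the coupling layers themselves: an affine coupling transform is analytically invertible, so forward followed by reverse should recover the input up to floating-point rounding. A minimal numpy sketch (not the repo's code; `scale` and `shift` here are stand-ins for the learned networks):

```python
import numpy as np

# Minimal affine coupling layer illustrating why a normalizing flow
# should invert exactly up to rounding error.

def coupling_forward(x, scale, shift):
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(scale(x1)) + shift(x1)  # invertible affine transform of x2
    return np.concatenate([x1, y2])          # x1 passes through unchanged

def coupling_reverse(y, scale, shift):
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift(y1)) * np.exp(-scale(y1))  # exact algebraic inverse
    return np.concatenate([y1, x2])

scale = lambda h: np.tanh(h)   # stand-in for a learned network
shift = lambda h: 0.5 * h      # stand-in for a learned network

x = np.linspace(-1.0, 1.0, 8)
y = coupling_forward(x, scale, shift)
x_rec = coupling_reverse(y, scale, shift)
assert np.max(np.abs(x - x_rec)) < 1e-12  # only rounding-level error
```

So large reconstruction errors cannot come from the coupling layers; they must come from somewhere else in the model, which the replies below pin down.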

sidney1505 avatar Aug 03 '22 12:08 sidney1505

Glow has a split layer which simply throws away half of the data (along the channel dimension). In the reverse process, the thrown-away part is sampled from a prior. You can modify the code to keep those thrown-away parts and use them in place of the sampling step; then you get a perfect reconstruction.
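The split behavior described above can be sketched in a few lines of numpy (an illustration of the idea, not the repo's actual API): reusing the stored half inverts exactly, while sampling it from the prior does not.

```python
import numpy as np

# Sketch of Glow's split step: forward factors out half the channels as a
# latent z; reverse either reuses the stored z (exact reconstruction) or
# samples z from the prior (as done for generation), losing the original data.

def split_forward(x):
    c = x.shape[0] // 2
    return x[:c], x[c:]  # (kept half, factored-out latent z)

def split_reverse(out, z=None, seed=0):
    if z is None:  # generation path: sample the missing half from the prior
        z = np.random.default_rng(seed).standard_normal(out.shape)
    return np.concatenate([out, z], axis=0)

x = np.arange(8, dtype=float)
out, z = split_forward(x)
x_exact = split_reverse(out, z)  # reuse stored z -> exact inverse
x_sampled = split_reverse(out)   # sample z -> differs from x
```

With the stored z, `x_exact` equals `x`; with sampled z, the second half of `x_sampled` is random noise, which is exactly the large difference the issue describes.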

gitlabspy avatar Aug 18 '22 12:08 gitlabspy

Hi, have you solved this problem? Could you share the solution? Thanks a lot!

Moleculebo avatar Feb 28 '23 13:02 Moleculebo


Hi, I once suspected that there was something wrong with this code, but after re-reading it carefully I noticed this line:

https://github.com/rosinality/glow-pytorch/blob/master/model.py#L301

So your code can be modified as follows:

latent = glow(x)[2]
x_rec = glow.reverse(latent, reconstruct=True)

ThoseBygones avatar Mar 09 '23 02:03 ThoseBygones

Thanks, it works.

VV20192019 avatar May 16 '23 10:05 VV20192019