(Maybe) inconsistency between the VQ-VAE paper and its implementation.
First of all, maybe this is just my misunderstanding of the paper, so I hope somebody can explain it to me, thanks!
In the paper, the loss is defined as

L = log p(x | z_q(x)) + ||sg[z_e(x)] - e||_2^2 + beta * ||z_e(x) - sg[e]||_2^2

where sg[.] is the stop-gradient operator and e is the codebook defined at the beginning of the section: a latent embedding space e in R^{K x D}, i.e. K embedding vectors e_i in R^D. So, in the paper, the codebook loss and the commitment loss are MSE between z_e(x) and e.
However, in the implementation, they are computed as MSE between z_e(x) (the variable inputs) and z_q(x) (the variable quantized), where quantized holds the quantized encoding of the image, namely z_q(x).
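The relevant part of the loss computation looks roughly like this (a simplified sketch in the style of the Sonnet VectorQuantizer, not the verbatim source; the names inputs, quantized, and commitment_cost mirror the implementation):

```python
import tensorflow as tf

def vq_loss(inputs, quantized, commitment_cost=0.25):
    """inputs = z_e(x), quantized = z_q(x); both of shape [batch, H', W', D]."""
    # Codebook loss: pulls the selected codebook entries toward z_e(x);
    # stop_gradient freezes the encoder output in this term.
    q_latent_loss = tf.reduce_mean((quantized - tf.stop_gradient(inputs)) ** 2)
    # Commitment loss: pulls z_e(x) toward the selected entries;
    # stop_gradient freezes the codebook side in this term.
    e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2)
    return q_latent_loss + commitment_cost * e_latent_loss
```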
Are they actually the same thing? Why?

- If the paper is right, how do the dimensions match between z_e(x) (H' * W' * D) and e (K * D)? (See the lookup sketch after this list.)
- If the implementation is right, how can z_q(x) (quantized) backprop, since its calculation contains argmin?
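To make the argmin lookup concrete, here is a minimal sketch of how z_q(x) is produced from the codebook (the names z_e and embeddings are mine, not from the source). Every spatial position of z_e(x) is replaced by its nearest codebook row, so z_q(x) comes out with the same H' * W' * D shape as z_e(x):

```python
import tensorflow as tf

def quantize(z_e, embeddings):
    """z_e: [batch, H', W', D] encoder output; embeddings: [K, D] codebook."""
    flat = tf.reshape(z_e, [-1, tf.shape(embeddings)[1]])            # [N, D]
    # Squared distance from every z_e vector to every codebook entry.
    distances = (
        tf.reduce_sum(flat ** 2, axis=1, keepdims=True)
        - 2.0 * tf.matmul(flat, embeddings, transpose_b=True)
        + tf.reduce_sum(embeddings ** 2, axis=1)
    )                                                                # [N, K]
    indices = tf.argmin(distances, axis=1)                           # [N]
    quantized = tf.gather(embeddings, indices)                       # [N, D]
    return tf.reshape(quantized, tf.shape(z_e))                      # same shape as z_e
```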
Probably, e in the loss formula in the paper actually stands for z_q(x). The author did not write it as z_q(x) because its calculation involves argmin, which is non-differentiable. However, this is not a problem when implementing it naively as z_q(x), because TensorFlow, as well as PyTorch, will stop the gradient before the argmin operation, so it works as intended and causes no bug.
That is my new understanding.
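And the reason backprop still reaches the encoder despite the argmin is the straight-through trick the implementation applies after quantization; a minimal sketch, assuming z_e and z_q as above:

```python
import tensorflow as tf

def straight_through(z_e, z_q):
    # Forward pass: returns z_q exactly.
    # Backward pass: the gradient w.r.t. the output is copied straight to
    # z_e, bypassing the non-differentiable argmin lookup inside z_q.
    return z_e + tf.stop_gradient(z_q - z_e)
```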
Please close this if an admin thinks this explanation is right.