VideoGPT
VQ-VAE VQ loss is missing? I can only find the reconstruction and commitment losses
Hi, I am currently studying VideoGPT and have some doubts about the VQ-VAE losses.
Where is the VQ loss?
- In the original paper there are three losses: a reconstruction loss, a VQ (codebook) loss that pulls the embedding toward `encoder_output.detach()`, and a commitment loss that pulls `encoder_output` toward `embedding.detach()`.
- I could only find the commitment loss implemented, not the VQ loss (a minimal sketch of what I mean is below).
Any information that helps resolve this would be highly appreciated.
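For clarity, here is a minimal sketch of the three terms as I understand them from the original VQ-VAE paper; variable names like `z_e` / `z_q` and the tensor shapes are placeholders, not the actual names used in vqvae.py:

```python
import torch
import torch.nn.functional as F

beta = 0.25                            # commitment cost (placeholder value)
x = torch.randn(8, 3, 16, 64, 64)      # dummy input batch
x_recon = torch.randn_like(x)          # decoder output (placeholder)
z_e = torch.randn(8, 256, 4, 16, 16)   # encoder output (placeholder)
z_q = torch.randn_like(z_e)            # nearest codebook embeddings (placeholder)

recon_loss = F.mse_loss(x_recon, x)            # reconstruction loss
vq_loss = F.mse_loss(z_q, z_e.detach())        # VQ / codebook loss: moves embeddings toward encoder outputs
commit_loss = F.mse_loss(z_e, z_q.detach())    # commitment loss: keeps encoder outputs near the embeddings
loss = recon_loss + vq_loss + beta * commit_loss
```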
That is because the codebook is updated using an exponential moving average (EMA), not by the gradient of a codebook loss (see line 176 of vqvae.py).
It's shown in this paper that EMA-based updates are equivalent to updating the codebook using SGD on the codebook loss.
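For intuition, here is a minimal sketch of an EMA codebook update under that scheme; this is not the exact code from vqvae.py, and names such as `cluster_size` and `embed_avg` are illustrative buffers:

```python
import torch

@torch.no_grad()
def ema_codebook_update(embeddings, cluster_size, embed_avg,
                        z_e_flat, encodings, decay=0.99, eps=1e-5):
    """One EMA step: each code vector moves toward the running mean of the
    encoder outputs assigned to it, so no gradient from a VQ loss is needed."""
    # encodings: one-hot assignments, shape (N, K); z_e_flat: encoder outputs, shape (N, D)
    counts = encodings.sum(dim=0)                           # per-code usage counts, shape (K,)
    cluster_size.mul_(decay).add_(counts, alpha=1 - decay)  # EMA of assignment counts
    embed_sum = encodings.t() @ z_e_flat                    # (K, D) sum of assigned encoder outputs
    embed_avg.mul_(decay).add_(embed_sum, alpha=1 - decay)  # EMA of those sums
    # Laplace smoothing so rarely used codes do not divide by zero
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + cluster_size.numel() * eps) * n
    embeddings.copy_(embed_avg / smoothed.unsqueeze(1))     # new code vectors = EMA mean

# Example usage with random assignments (shapes are illustrative):
K, D, N = 512, 64, 1024
embeddings = torch.randn(K, D)
cluster_size = torch.zeros(K)
embed_avg = embeddings.clone()
z_e_flat = torch.randn(N, D)
encodings = torch.nn.functional.one_hot(torch.randint(0, K, (N,)), K).float()
ema_codebook_update(embeddings, cluster_size, embed_avg, z_e_flat, encodings)
```

Because the embeddings are written directly from these running averages, only the reconstruction and commitment terms need to appear in the loss, which is why you don't see a separate VQ loss in the code.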