vq-vae-2-pytorch
Decoder Top vs Upsampling layer
Hi @rosinality, thank you very much for your contribution.
I have a question regarding the upsampling of the top latent representation.
During encoding, the top decoder is used to upsample the quantized top representation by a factor of 2.
During decoding, the sampled codes are quantized and then upsampled using a separate ConvTranspose layer (_upsample_t).
My question is: why can't we use the top decoder here again? Why do we have to add a separate layer that learns the same mapping?
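To make the question concrete, here is a minimal sketch (not the repository's exact code) of the two places where the quantized top representation gets upsampled by 2x. The names `dec_t`, `_upsample_t`, `quant_t`, `enc_b`, and `quant_b` follow the description above; the layer configurations and tensor shapes are illustrative assumptions, and `dec_t` is reduced to a single ConvTranspose as a stand-in for the full top decoder.

```python
import torch
from torch import nn

embed_dim, channel = 64, 128

# Path 1 (encoding): the top decoder upsamples quant_t by 2x so it can be
# concatenated with the bottom encoder features before bottom quantization.
# (Stand-in for the full top Decoder; illustrative only.)
dec_t = nn.ConvTranspose2d(embed_dim, embed_dim, 4, stride=2, padding=1)

# Path 2 (decoding): a separate ConvTranspose layer upsamples quant_t by 2x so it
# can be concatenated with quant_b and fed to the final decoder.
upsample_t = nn.ConvTranspose2d(embed_dim, embed_dim, 4, stride=2, padding=1)

quant_t = torch.randn(1, embed_dim, 8, 8)    # quantized top codes (1/8 resolution)
enc_b = torch.randn(1, channel, 16, 16)      # bottom encoder features (1/4 resolution)
quant_b = torch.randn(1, embed_dim, 16, 16)  # quantized bottom codes (1/4 resolution)

# During encoding: dec_t(quant_t) is concatenated with enc_b.
encode_side = torch.cat([dec_t(quant_t), enc_b], 1)        # (1, embed_dim + channel, 16, 16)

# During decoding: upsample_t(quant_t) is concatenated with quant_b.
decode_side = torch.cat([upsample_t(quant_t), quant_b], 1)  # (1, 2 * embed_dim, 16, 16)

print(encode_side.shape, decode_side.shape)
```

So both paths map quant_t from 1/8 to 1/4 resolution, and my question is whether the decoding path could reuse `dec_t` instead of training the separate `_upsample_t` layer.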