pytorch-vq-vae

dimension issue

Open jlian2 opened this issue 4 years ago • 0 comments

```python
def forward(self, inputs):
    # convert inputs from BCHW -> BHWC
    inputs = inputs.permute(0, 2, 3, 1).contiguous()
    input_shape = inputs.shape

    # Flatten input
    flat_input = inputs.view(-1, self._embedding_dim)
```

My understanding is that flat_input should have shape (B\*H\*W\*C, embedding_dim), so one dimension seems to be missing. Or are you saying that the number of channels equals embedding_dim?
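For concreteness, here is a minimal shape check of the permute/flatten step (the sizes are hypothetical, and it assumes the encoder output's channel count C equals self._embedding_dim, which is how the quantizer appears to be used):

```python
import torch

# Hypothetical sizes for illustration only.
B, C, H, W = 2, 64, 8, 8      # BCHW encoder output; C is assumed to equal embedding_dim
embedding_dim = 64

inputs = torch.randn(B, C, H, W)

# Same permute/flatten as in forward(): BCHW -> BHWC, then collapse B, H, W.
inputs = inputs.permute(0, 2, 3, 1).contiguous()   # (B, H, W, C)
flat_input = inputs.view(-1, embedding_dim)        # (B*H*W, embedding_dim) when C == embedding_dim

print(inputs.shape)      # torch.Size([2, 8, 8, 64])
print(flat_input.shape)  # torch.Size([128, 64])
```

Under that assumption, flat_input is (B\*H\*W, embedding_dim) rather than (B\*H\*W\*C, embedding_dim), since the channel dimension itself becomes the embedding dimension.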

jlian2 · Jun 29 '20 03:06