jasper
Output size is weird
On the README example:

```python
inputs.size()  # => torch.Size([3, 1234, 80])
output.size()  # => torch.Size([3, 556, 10])
```
How do I preserve the time dimension?
It seems the first block of the encoder and the block of the decoder should both use stride 1 instead. With that change, the padding should be calculated from the kernel size rather than hard-coded to 1.
NVIDIA's DeepLearningExamples JASPER helped me figure this out: 1, 2.
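As a minimal sketch of what I mean (hypothetical code, not the repo's actual layers): with stride 1, a 1D convolution preserves the time dimension when the padding is computed as `dilation * (kernel_size - 1) // 2` instead of being fixed at 1. The channel sizes below are made up for illustration.

```python
import torch
import torch.nn as nn

def same_pad(kernel_size: int, dilation: int = 1) -> int:
    # "Same" padding for a stride-1 conv with an odd kernel size:
    # L_out = (L_in + 2*pad - dilation*(kernel_size - 1) - 1) + 1 = L_in
    return dilation * (kernel_size - 1) // 2

conv = nn.Conv1d(
    in_channels=80,           # e.g. mel-filterbank features
    out_channels=256,         # arbitrary for this sketch
    kernel_size=11,
    stride=1,                 # stride 1 instead of 2
    padding=same_pad(11),     # calculated, not hard-coded to 1
)

x = torch.randn(3, 80, 1234)  # (batch, features, time)
y = conv(x)
print(y.size())               # time dimension preserved: torch.Size([3, 256, 1234])
```

With this padding rule, every stride-1 block leaves the time axis untouched, so the length reduction in the example above (1234 → 556) goes away once the stride-2 blocks are changed.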