flowtron
pack_padded_sequence cuda fix
Hi! I ran into trouble during inference caused by one of the model's internal operations:
It seems that in PyTorch 1.7.0, the `src_length` tensor passed to torch.nn.utils.rnn.pack_padded_sequence must be on the CPU, even when running on CUDA: pytorch/pytorch#43227
I also tested this in PyTorch 1.6.0 to check backward compatibility, and there it works fine both with and without .cpu().
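A minimal sketch of the behavior described above (the tensor names here are illustrative, not taken from the flowtron code): on PyTorch 1.7.0, passing CUDA-resident lengths to pack_padded_sequence raises a RuntimeError, while moving them to the CPU works on both 1.6.0 and 1.7.0.

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = "cuda" if torch.cuda.is_available() else "cpu"

# Batch of 2 padded sequences, max length 3, feature size 4.
seqs = torch.randn(2, 3, 4, device=device)
lengths = torch.tensor([3, 2], device=device)

# On 1.7.0, passing `lengths` while it lives on the GPU raises a
# RuntimeError; moving it to the CPU first works on 1.6.0 and 1.7.0.
packed = pack_padded_sequence(seqs, lengths.cpu(), batch_first=True)
print(packed.data.shape)  # 3 + 2 = 5 total timesteps, feature size 4
```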