MoViNet-pytorch
Model not compatible with TorchScript conversion (via torch.jit.script)
It is not possible to convert the model to TorchScript using the function `torch.jit.script`. In particular, the code returns an error because of the usage of `...` in this line:

https://github.com/Atze00/MoViNet-pytorch/blob/c2d1edf48fc6c5259707f9d833f22171b4f63493/movinets/models.py#L276

Even after changing the type-hint definition to overcome this problem, the conversion is still not possible because the attribute `activation` is initialized as `None` and then filled with a `Tensor`.
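For context, TorchScript requires module attributes to keep a single static type, so an attribute that starts as `None` and is later assigned a `Tensor` generally needs an explicit `Optional[torch.Tensor]` annotation plus refinement through a local variable. A minimal sketch of the pattern (the `Buffered` module and its attribute are hypothetical, not the actual MoViNet code):

```python
from typing import Optional

import torch
from torch import nn


class Buffered(nn.Module):
    """Hypothetical module illustrating the None-then-Tensor pattern."""

    def __init__(self):
        super().__init__()
        # Without this annotation, the attribute type is inferred from the
        # initial value (NoneType) and the later Tensor assignment is rejected.
        self.activation: Optional[torch.Tensor] = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Refine through a local: TorchScript narrows Optional types on local
        # variables, not on repeated `self.activation` accesses.
        act = self.activation
        if act is None:
            act = torch.zeros_like(x)
            self.activation = act
        return x + act


scripted = torch.jit.script(Buffered())  # expected to compile with the annotation
```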
Unfortunately, I have no plans to support TorchScript at the moment. I'm also not convinced that this is the only problem; for example, the computation of "same" padding could raise an error. If that's the case, I suspect it would also be necessary to train the models from scratch in PyTorch.
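To illustrate why "same" padding can be a separate obstacle, here is a generic TF-style sketch (not the repository's actual implementation): the padding amounts are computed from the runtime input size, so `torch.jit.trace` would freeze the values seen for the example clip, while `torch.jit.script` only accepts the arithmetic if it is written in a script-friendly, fully typed way.

```python
import torch
import torch.nn.functional as F
from torch import nn


class SameConv3d(nn.Module):
    """Generic TF-style 'same' padding sketch; not the repository's code."""

    def __init__(self, in_ch: int, out_ch: int, kernel: int, stride: int):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel, stride)
        self.kernel = kernel
        self.stride = stride

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Padding depends on the runtime T/H/W, which is what makes this
        # data-dependent code tricky to script or trace faithfully.
        pads = []
        for size in (x.shape[-1], x.shape[-2], x.shape[-3]):  # W, H, T
            out = (size + self.stride - 1) // self.stride
            total = max((out - 1) * self.stride + self.kernel - size, 0)
            pads.extend([total // 2, total - total // 2])
        return self.conv(F.pad(x, pads))


# Example: output spatial/temporal sizes follow ceil(input / stride).
y = SameConv3d(3, 8, kernel=3, stride=2)(torch.rand(1, 3, 8, 32, 32))
```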
The type hint issue was easy to get around, but even more problematic is the use of einops, which appears to be incompatible with TorchScript:
```python
from torch import nn
import torch
from einops import rearrange


class Foo(nn.Module):
    def __init__(self):
        super(Foo, self).__init__()

    def forward(self, x):
        return rearrange(x, 'a b -> b a')


torch.jit.script(Foo())
```

```
...
NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/Users/sean/standard/.env/lib/python3.6/site-packages/einops/einops.py", line 393
    def rearrange(tensor, pattern: str, **axes_lengths):
                                        ~~~~~~~~~~~~~ <--- HERE
```
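One possible workaround (a sketch of the general idea, not necessarily how any fork handles it) is to replace the `rearrange` call with an equivalent native tensor operation, after which `torch.jit.script` compiles the module:

```python
import torch
from torch import nn


class FooScriptable(nn.Module):
    # Same behaviour as Foo above, with rearrange(x, 'a b -> b a')
    # replaced by a native permute so the module can be scripted.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.permute(1, 0)


scripted = torch.jit.script(FooScriptable())
print(scripted(torch.rand(3, 3)).shape)  # torch.Size([3, 3])
```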
It looks like tracing may be compatible, though it is more restrictive, per this einops issue: https://github.com/arogozhnikov/einops/issues/115
```python
foo = Foo()
traced_foo = torch.jit.trace(foo, torch.rand(3, 3))
```
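As a concrete example of why tracing is more restrictive: `torch.jit.trace` only records the operations executed for the example input, so data-dependent Python control flow gets baked in (the `Gate` module below is hypothetical):

```python
import torch
from torch import nn


class Gate(nn.Module):
    # Hypothetical module with data-dependent control flow.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.sum() > 0:
            return x * 2
        return x


# Only the branch taken for the example input is recorded; other inputs
# still follow that recorded path (PyTorch emits a TracerWarning here).
traced_gate = torch.jit.trace(Gate(), torch.ones(3))
print(traced_gate(-torch.ones(3)))  # tensor([-2., -2., -2.])
```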
Is there any solution to this problem?
I have a fork that modifies the models so that they can be exported to TorchScript or ONNX: https://github.com/Subalzero/MoViNet-pytorch
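For anyone trying that route, a rough sketch of what the export could look like, assuming the fork keeps the upstream constructor (`MoViNet(_C.MODEL.MoViNetA0, ...)`) and accepts a `(batch, channels, frames, height, width)` clip; the file names, dummy input shape, and opset version here are arbitrary choices:

```python
import torch
from movinets import MoViNet
from movinets.config import _C

# Assumes the fork keeps the upstream constructor and forward signature.
model = MoViNet(_C.MODEL.MoViNetA0, causal=False, pretrained=False).eval()

# TorchScript export.
scripted = torch.jit.script(model)
scripted.save("movinet_a0_scripted.pt")

# ONNX export with an arbitrary dummy clip (batch, channels, frames, H, W).
dummy = torch.rand(1, 3, 8, 172, 172)
torch.onnx.export(model, dummy, "movinet_a0.onnx",
                  input_names=["video"], output_names=["logits"],
                  opset_version=13)
```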