EfficientNet-PyTorch
Cannot use torch.jit.script to export model
```
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
at /usr/local/lib/python3.7/dist-packages/torch/autograd/function.py:26:25
    def mark_dirty(self, *args):
                         ~~~~~ <--- HERE
        r"""Marks given tensors as modified in an in-place operation.

        This should be called at most once, only from inside the
        :func:`forward` method, and all arguments should be inputs.

        Every tensor that's been modified in-place in a call to :func:`forward`
        should be given to this function, to ensure correctness of our checks.
        It doesn't matter whether the function is called before or after
        modification.

'SwishImplementation' is being compiled since it was called from 'torch.efficientnet_pytorch.utils.MemoryEfficientSwish.forward'
```
It seems that you need to call `model.set_swish(False)` before exporting the model.
Wouldn't it be possible to make `MemoryEfficientSwish` JIT-compatible?
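For context, `set_swish(False)` replaces the memory-efficient variant (which relies on a custom `torch.autograd.Function` that TorchScript cannot compile) with a plain Swish module. A minimal sketch of such a scriptable Swish, not the library's exact source:

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Plain Swish activation: x * sigmoid(x).
    No custom autograd.Function, so torch.jit.script can compile it."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(x)

# Sanity check: the module scripts without errors.
scripted = torch.jit.script(Swish())
print(scripted(torch.randn(2, 3)))
```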
> It seems that you need to call `model.set_swish(False)` before exporting the model.
I use the following code to export a JIT model:

```python
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
model.set_swish(False)
model.eval()
traced_script_module = torch.jit.script(model)
traced_script_module.save(r"efficientdet-d0_traced.pth")
```

But the following error occurred:

```
RuntimeError: Tried to access nonexistent attribute or method 'expand_ratio' of type 'Tuple[int, int, List[int], int, int, int, float, bool]'.:
  File "\EfficientNet-PyTorch\efficientnet_pytorch\model.py", line 93
        # Expansion and Depthwise Convolution
        x = inputs
        if self._block_args.expand_ratio != 1:
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x = self._expand_conv(inputs)
            x = self._bn0(x)
```

Is there any setting I missed?
Hi, I'm stuck in the same place as you; have you found a solution?
> Hi, I'm stuck in the same place as you; have you found a solution?

This is the solution:

> It seems that you need to call `model.set_swish(False)` before exporting the model.
Unfortunately, in my case it doesn't help. I do:
```python
from efficientnet_pytorch import EfficientNet
import torch

model = EfficientNet.from_pretrained('efficientnet-b0')
model.set_swish(False)
scripted_model = torch.jit.script(model)
```
and get:

```
Tried to access nonexistent attribute or method 'expand_ratio' of type 'Tuple[int, int, int, int, int, bool, List[int], float]'.:
  File "C:\Users\xxx\AppData\Local\Continuum\anaconda3\envs\xxx\lib\site-packages\efficientnet_pytorch\model.py", line 76
        # Expansion and Depthwise Convolution
        x = inputs
        if self._block_args.expand_ratio != 1:
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x = self._swish(self._bn0(self._expand_conv(inputs)))
            x = self._swish(self._bn1(self._depthwise_conv(x)))
```
You should use `torch.jit.trace` instead.
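A minimal sketch of the tracing route (the dummy input shape and file name are just placeholders):

```python
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
model.set_swish(False)  # commonly recommended before export
model.eval()

# trace records the operations executed for one concrete example input
dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image (b0 default resolution)
traced = torch.jit.trace(model, dummy_input)
traced.save("efficientnet-b0_traced.pt")

# the saved module can be reloaded without the Python class definitions
loaded = torch.jit.load("efficientnet-b0_traced.pt")
print(loaded(dummy_input).shape)  # torch.Size([1, 1000])
```

Note that tracing bakes in the control flow taken for the example input, which is fine here because EfficientNet's forward pass is the same for every input.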
Many thanks, with torch.jit.trace it works! 👍
> It seems that you need to call `model.set_swish(False)` before exporting the model.

If the model is wrapped (e.g. in `nn.DataParallel`), that call becomes `model.module.set_swish(False)`.
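A short sketch of that case, assuming the model was wrapped in `nn.DataParallel` (the wrapping itself is hypothetical here):

```python
import torch
import torch.nn as nn
from efficientnet_pytorch import EfficientNet

wrapped = nn.DataParallel(EfficientNet.from_pretrained('efficientnet-b0'))

# set_swish lives on the underlying EfficientNet, so go through .module
wrapped.module.set_swish(False)
wrapped.module.eval()

# export the unwrapped model, not the DataParallel wrapper
dummy_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(wrapped.module, dummy_input)
```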
But what if `jit.script` is needed? For instance, `jit.trace` isn't working after quantization.