onnx-tensorflow
Conversion PyTorch to ONNX to SavedModel -> wrong concrete functions (__call__)
Describe the bug
After exporting a pretrained PyTorch resnet101 to ONNX and converting it to a SavedModel with onnx-tf, it seems that even though the SignatureDefs are present, the ConcreteFunctions are not converted correctly:
Concrete Functions:
  Function Name: '__call__'
    Named Argument #1
      input

  Function Name: 'gen_tensor_dict'
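For reference, the listing above looks like the output of saved_model_cli show --dir resnet101_sm --all. The same information can be inspected from Python; a minimal sketch, assuming the SavedModel path resnet101_sm from the reproduction below:

import tensorflow as tf

loaded = tf.saved_model.load('resnet101_sm')
# The SignatureDefs are present ...
print(list(loaded.signatures.keys()))        # e.g. ['serving_default']
sig = loaded.signatures['serving_default']
print(sig.structured_input_signature)        # TensorSpec recorded for the 'input' argument
print(sig.structured_outputs)
# ... but __call__ itself is only restored as the generic concrete functions listed above.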
And then, if I try to re-save the model after redefining the SignatureDefs, the save never finishes and appears to hang:

t_spec = tf.TensorSpec(shape=(None, 3, 32, 32), dtype=tf.float32, name='input')
c_func = loaded_model.__call__.get_concrete_function(input=t_spec)
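To narrow down whether the re-tracing or the saving itself hangs, one check worth doing (a sketch, not verified) is to run the freshly traced concrete function once before calling tf.saved_model.save:

# Assumption: c_func is the concrete function obtained above from loaded_model.__call__.
# If this call already hangs, the problem is the tracing of the restored function with a
# dynamic batch dimension rather than tf.saved_model.save itself.
dummy = tf.zeros([1, 3, 32, 32], dtype=tf.float32)
print(c_func(input=dummy))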
To Reproduce
Please give us instructions to reproduce your problem.
import torch
import torchvision
import tensorflow as tf

# paths: model_path_pth, model_path_onnx, model_path_sm, resaved_model_path
# model name: resnet101.onnx

if __name__ == '__main__':
    cuda = torch.device('cuda:1')
    with torch.cuda.device(1):
        # 1. Export the pretrained resnet101 to ONNX with a dynamic batch dimension
        dummy_input = torch.randn(1, 3, 32, 32, device=cuda, requires_grad=True)
        model = torchvision.models.resnet101(weights=True)
        model.load_state_dict(torch.load(model_path_pth))
        model.to(cuda)
        model.eval()
        torch.onnx.export(model,
                          dummy_input,
                          model_path_onnx,
                          export_params=True,
                          opset_version=15,
                          do_constant_folding=True,
                          input_names=['input'],
                          output_names=['output'],
                          dynamic_axes={'input': {0: 'batch_size'},
                                        'output': {0: 'batch_size'}})

# 2. Convert the ONNX model to a SavedModel with onnx-tf (run as a notebook/shell command)
!onnx-tf convert --device CUDA -i resnet101.onnx -o resnet101_sm

# 3. Load the SavedModel, redefine the signature with a dynamic batch dimension and re-save
loaded_model = tf.saved_model.load(model_path_sm)
t_spec = tf.TensorSpec(shape=(None, 3, 32, 32), dtype=tf.float32, name='input')
c_func = loaded_model.__call__.get_concrete_function(input=t_spec)
tf.saved_model.save(loaded_model, resaved_model_path, signatures=c_func)  # -> never stops
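As a side note, the dynamic batch dimension can be verified at the ONNX level before the onnx-tf conversion; a sketch, assuming onnxruntime is installed and reusing the paths from the script above:

import numpy as np
import onnx
import onnxruntime as ort

onnx_model = onnx.load(model_path_onnx)
onnx.checker.check_model(onnx_model)

sess = ort.InferenceSession(model_path_onnx, providers=['CPUExecutionProvider'])
print(sess.get_inputs()[0].shape)            # e.g. ['batch_size', 3, 32, 32]
# Run with two different batch sizes to confirm the dynamic axis works before conversion.
for bs in (1, 4):
    out = sess.run(None, {'input': np.random.randn(bs, 3, 32, 32).astype(np.float32)})
    print(bs, out[0].shape)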
Python, ONNX, ONNX-TF, Tensorflow version
This section can be obtained by running get_version.py from the util folder.
- Python version: 3.9.7
- ONNX version: 1.12
- ONNX-TF version: 1.10
- Tensorflow version: 2.9.1
Additional context
Even if I try to reimplement the __call__ method, I am not able to call model(input_tensor).
OK, without None in the shape it works. Is there an alternative way to support None in the tensor shape (i.e. a dynamic batch dimension)?
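One possible workaround for the None dimension (a sketch only, not verified against this bug, and it may hit the same hang if the converted signature was traced with a fixed batch size): wrap the restored model in a new tf.Module, expose a tf.function whose input_signature has a dynamic batch dimension, call the converted model through its serving_default signature instead of __call__, and save the wrapper. The signature key 'serving_default', the input name 'input' and the paths are assumptions taken from the reproduction above.

import tensorflow as tf

class DynamicBatchWrapper(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec([None, 3, 32, 32], tf.float32, name='input')])
    def serve(self, x):
        # Call through the SignatureDef instead of the restored __call__.
        return self.model.signatures['serving_default'](input=x)

loaded_model = tf.saved_model.load(model_path_sm)
wrapper = DynamicBatchWrapper(loaded_model)
tf.saved_model.save(wrapper, resaved_model_path,
                    signatures={'serving_default': wrapper.serve})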