Flexible input shapes conversion problem: `ValueError: @max_pool does not support symbolic input spatial shape when ceil_mode is True`. What does this mean?
```python
# example_input = torch.rand(1, 3, 320, 320)  # after testing, gets a 'size mismatch' error with size 256x256
# traced_model = torch.jit.trace(model, example_input)
# inputs=[ct.ImageType()]

# Range for the sequence dimension to be between [1, 50]
input_shape = ct.Shape(shape=(1, 3, ct.RangeDim(), ct.RangeDim()))
print(input_shape.shape)
print(example_input.shape)
# shapes = [(1, 3, 6*i, 6*i) for i in range(1, 100)]
# input_shape = ct.EnumeratedShapes(shapes=shapes)
# model_input = ct.TensorType(name="image", shape=input_shape)
print("#####" + str(ct.SPECIFICATION_VERSION))
model2 = ct.convert(
    model,
    # inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    inputs=[ct.TensorType(name="image", shape=input_shape.shape)],
    # inputs=[ct.TensorType(name="image", shape=input_shape.shape)],  # have tried, the same error
    # outputs=[ct.ImageType(name="outImage", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS13,
)
```
Output:

```
(1, 3, RangeDim(lower_bound=1, upper_bound=-1, default=1, symbol="is0"), RangeDim(lower_bound=1, upper_bound=-1, default=1, symbol="is1"))
torch.Size([1, 3, 320, 320])
#####1
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops:   0% 0/1449 [00:00<?, ? ops/s]
Traceback (most recent call last):
  File "pth2ml.py", line 57, in
```
@Siq1982 - can you please give us complete code that reproduces this issue? It's very difficult to help you without being able to reproduce it. I see that you did share some code; however, a lot of it is commented out, and it's not clear what should be run. Also, `model` is never defined.
Thanks for your reply. Here is the attachment to reproduce the issue: reproduce_error.zip. I have tried every way I can imagine, but all in vain; I am still struggling with it.
I can't load your model. What versions do you have of the following Python packages: torch, torchvision, coremltools? Also what operating system are you using?
- torch 1.7.0+cu101
- torchsummary 1.5.1
- torchtext 0.3.1
- torchvision 0.8.1+cu101
- coremltools 4.1 (in /usr/local/lib/python3.6/dist-packages)
- OS: Google Colab with GPU
Notice the shape `(1, 16, is8, is9)` in the output below. What does this mean?
```
Converting Frontend ==> MIL Ops:   0% 0/1449 [00:00<?, ? ops/s]
%714: (1, 16, is8, is9, fp32)(Tensor)
(1, 16, is8, is9)
(is8, is9)
x_spatial_dimensions
Converting Frontend ==> MIL Ops:   0% 0/1449 [00:00<?, ? ops/s]
Traceback (most recent call last):
  File "pth2ml.py", line 63, in
```
The reproduction code contains a custom implementation of upsample_bilinear2d. Coremltools 6.0 now supports this layer type; our implementation looks quite a bit different from yours.
@Siq1982 - Does the model work with coremltools 6.0 if you remove your implementation of upsample_bilinear2d? If this is still an issue, please give steps to reproduce without loading an external model. Loading an arbitrary PyTorch model is insecure.
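If it helps, here is a minimal, self-contained sketch (no external model required) for checking that a bilinear upsample converts with coremltools 6.0; the toy module and its names are hypothetical:

```python
import torch
import torch.nn as nn
import coremltools as ct

# Hypothetical toy model whose forward lowers to upsample_bilinear2d.
class Toy(nn.Module):
    def forward(self, x):
        return nn.functional.interpolate(
            x, scale_factor=2, mode="bilinear", align_corners=False)

example = torch.rand(1, 3, 320, 320)
traced = torch.jit.trace(Toy().eval(), example)

# With coremltools 6.0 this should convert without a custom implementation.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example.shape)],
)
```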