
Flexible input shapes conversion problem: ValueError: @max_pool does not support symbolic input spatial shape when ceil_mode is True. What does it mean?

Open Siq1982 opened this issue 4 years ago • 5 comments

```python
# example_input = torch.rand(1, 3, 320, 320)  # after test, will get 'size mismatch' error message with size 256x256
# traced_model = torch.jit.trace(model, example_input)
# inputs = [ct.ImageType()]

# Range for the sequence dimension to be between [1, 50]
input_shape = ct.Shape(shape=(1, 3, ct.RangeDim(), ct.RangeDim()))
print(input_shape.shape)
print(example_input.shape)
# shapes = [(1, 3, 6*i, 6*i) for i in range(1, 100)]
# input_shape = ct.EnumeratedShapes(shapes=shapes)
# model_input = ct.TensorType(name="image", shape=input_shape)
print("#####" + str(ct.SPECIFICATION_VERSION))
model2 = ct.convert(
    model,
    # inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    inputs=[ct.TensorType(name="image", shape=input_shape.shape)],
    # inputs=[ct.TensorType(name="image", shape=input_shape.shape)],  # have tried, the same error
    # outputs=[ct.ImageType(name="outImage", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS13,
)
```
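As an aside, below is a minimal sketch of the enumerated-shapes alternative that is commented out in the snippet above. It sidesteps symbolic spatial dimensions by giving the converter a fixed list of allowed input sizes. The `model` variable and the specific shape list are assumptions taken from the snippet, not a verified fix for this particular network.

```python
import coremltools as ct

# A fixed set of allowed input sizes instead of an open-ended range.
# The sizes below are illustrative; use whichever resolutions the app needs.
shapes = [(1, 3, 256, 256), (1, 3, 320, 320), (1, 3, 640, 640)]
enumerated = ct.EnumeratedShapes(shapes=shapes, default=(1, 3, 320, 320))

mlmodel = ct.convert(
    model,  # assumed: the same TorchScript model being converted above
    inputs=[ct.TensorType(name="image", shape=enumerated)],
    minimum_deployment_target=ct.target.iOS13,
)
```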

```
(1, 3, RangeDim(lower_bound=1, upper_bound=-1, default=1, symbol="is0"), RangeDim(lower_bound=1, upper_bound=-1, default=1, symbol="is1"))
torch.Size([1, 3, 320, 320])
#####1
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops:   0%  0/1449 [00:00<?, ? ops/s]
Traceback (most recent call last):
  File "pth2ml.py", line 57, in <module>
    minimum_deployment_target=ct.target.iOS13,
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/_converters_entry.py", line 182, in convert
    **kwargs
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/converter.py", line 129, in mil_convert
    ConverterRegistry, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/converter.py", line 85, in __call__
    return load(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/load.py", line 83, in load
    raise e
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/load.py", line 73, in load
    prog = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/converter.py", line 227, in convert
    convert_nodes(self.context, self.graph)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/ops.py", line 58, in convert_nodes
    _add_op(context, node)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/ops.py", line 694, in max_pool2d
    _max_pool(context, node, inputs)
  File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/frontend/torch/ops.py", line 669, in _max_pool
    raise ValueError("@max_pool does not support symbolic input spatial shape when ceil_mode is True")
ValueError: @max_pool does not support symbolic input spatial shape when ceil_mode is True
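```

On the error itself: is0 and is1 are the symbolic names the converter assigns to the height and width RangeDims, and the torch frontend refuses to translate max_pool when ceil_mode is True and the spatial size is symbolic, because the pooled output size cannot be computed ahead of time. One possible workaround is sketched below; it assumes the `model` and `example_input` from the snippet above, and note that flipping ceil_mode can change pooled output sizes for some inputs, so the converted model would need to be re-validated.

```python
import torch
import torch.nn as nn

# Sketch: switch every MaxPool2d to ceil_mode=False before tracing, so the
# converter no longer needs ceil-mode output sizes for symbolic H/W.
# Caution: this can change output spatial sizes for some input resolutions.
for module in model.modules():
    if isinstance(module, nn.MaxPool2d):
        module.ceil_mode = False

traced_model = torch.jit.trace(model, example_input)
```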

❓Question


Siq1982 avatar Feb 16 '21 13:02 Siq1982

@Siq1982 - can you please give us complete code that reproduces this issue? It's very difficult to help you without being able to reproduce it. I see that you did share some code; however, a lot of it is commented out, and it's not clear what should be run. Also, model is never defined.

TobyRoseman avatar Feb 16 '21 21:02 TobyRoseman

Thanks for your reply. Here is the attachment to reproduce the issue. I have tried every way I can think of, but all in vain; I am still struggling with it. reproduce_error.zip


Siq1982 avatar Feb 16 '21 22:02 Siq1982

I can't load your model. What versions do you have of the following Python packages: torch, torchvision, coremltools? Also what operating system are you using?

TobyRoseman avatar Feb 16 '21 22:02 TobyRoseman

  • torch 1.7.0+cu101
  • torchsummary 1.5.1
  • torchtext 0.3.1
  • torchvision 0.8.1+cu101
  • coremltools 4.1 (in /usr/local/lib/python3.6/dist-packages)

Environment: Colab with GPU.

Siq1982 avatar Feb 17 '21 15:02 Siq1982

Notice the shape (1, 16, is8, is9) in the log below. What does this mean?

```
Converting Frontend ==> MIL Ops:   0%  0/1449 [00:00<?, ? ops/s]
2 %714: (1, 16, is8, is9, fp32)(Tensor)
(1, 16, is8, is9)
(is8, is9) x_spatial_dimensions
Converting Frontend ==> MIL Ops:   0%  0/1449 [00:00<?, ? ops/s]
Traceback (most recent call last):
  File "pth2ml.py", line 63, in
```

Siq1982 avatar Feb 18 '21 17:02 Siq1982
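A note on the question above: names like is8 and is9 are not errors by themselves. They are symbols the converter assigns to any dimension whose size is not fixed at conversion time (here, the height and width coming from RangeDim), and they propagate through intermediate tensors such as %714. A tiny sketch of where they come from, assuming coremltools is imported as ct:

```python
import coremltools as ct

# Each flexible RangeDim gets an auto-assigned symbol ("is0", "is1", ...).
h = ct.RangeDim(lower_bound=1, upper_bound=1024, default=320)
w = ct.RangeDim(lower_bound=1, upper_bound=1024, default=320)
flexible = ct.Shape(shape=(1, 3, h, w))
print(flexible.shape)  # the flexible dims print as RangeDim(..., symbol="isN")
```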

The reproduction code contains a custom implementation of upsample_bilinear2d. Coremltools 6.0 now supports this layer type; our implementation looks quite a bit different from yours.

@Siq1982 - Does the model work with coremltools 6.0 if you remove your implementation of upsample_bilinear2d? If this is still an issue, please give steps to reproduce without loading an external model. Loading an arbitrary PyTorch model is insecure.

TobyRoseman avatar Oct 25 '22 18:10 TobyRoseman
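For readers hitting this today, here is a rough sketch of what the conversion can look like on coremltools 6.0, where bilinear upsampling is translated by the built-in torch frontend, so a custom upsample_bilinear2d implementation should not be needed. The `model` variable is a placeholder for the network from the attached reproduction; this is not a verified fix for that specific model.

```python
import torch
import coremltools as ct

# Assumes `model` no longer registers a custom upsample_bilinear2d conversion;
# coremltools 6.0 translates torch's native bilinear upsample itself.
model.eval()
example_input = torch.rand(1, 3, 320, 320)
traced_model = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS13,
)
mlmodel.save("model.mlmodel")
```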