[BUG] Tensor intrinsic dtype mismatch compiling quantized depthwise convolution on `arm_cpu` target
Expected behaviour:
When an `arm_cpu` target is used, the model should compile successfully.
Actual behaviour:
When an `arm_cpu` target is used, the model fails to compile during tensorization:
```
E at /workspaces/tvm/src/tvm/src/te/operation/tensorize.cc:334
E File "/workspaces/tvm/src/tvm/src/te/operation/tensorize.cc", line 334
E TVMError: Failed to match the data type with TensorIntrin tensor_intrin's declaration provided=int64, intrin=int32
```
Environment:
Tested with TVM at commit 6a3fadc0654ecf9557ffe08d24677684c96e80b0. The issue was found as a result of the changes in #16513, but it can be reproduced without them, as described below.
How to reproduce:
Run the test `pytest tests/python/frontend/tflite/test_forward.py -k test_forward_quantized_depthwise_convolution` with an `arm_cpu` target. Note: remove any skip condition currently present in the test. A standalone sketch of an equivalent compilation is shown below.
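For reference, here is a minimal standalone sketch that compiles a hand-built quantized depthwise convolution for an `arm_cpu` target, without going through the TFLite frontend. The shapes, quantization parameters, and target string are illustrative assumptions and may not reproduce the exact tensorize error from the test graph, but they exercise the same `arm_cpu` strategy path.

```python
# Hypothetical reproduction sketch: a quantized depthwise conv2d compiled for arm_cpu.
# Shapes, quantization parameters, and the target string are assumptions, not taken
# from the failing TFLite test model.
import numpy as np
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 32, 32, 16), dtype="int8")
weight = relay.const(np.random.randint(-8, 8, size=(3, 3, 16, 1)).astype("int8"))

# groups == channels makes this a depthwise convolution (HWOI kernel layout).
conv = relay.qnn.op.conv2d(
    data,
    weight,
    input_zero_point=relay.const(0, "int32"),
    kernel_zero_point=relay.const(0, "int32"),
    input_scale=relay.const(0.5, "float32"),
    kernel_scale=relay.const(0.5, "float32"),
    kernel_size=(3, 3),
    channels=16,
    groups=16,
    data_layout="NHWC",
    kernel_layout="HWOI",
    out_dtype="int32",
)
mod = tvm.IRModule.from_expr(relay.Function(relay.analysis.free_vars(conv), conv))

target = tvm.target.Target("llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon")
with tvm.transform.PassContext(opt_level=3):
    # On affected commits, compiling the TFLite test model fails inside tensorize.cc;
    # this simplified graph drives the same arm_cpu depthwise strategy selection.
    lib = relay.build(mod, target=target)
```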
The schedule selection in `relay/strategy/arm_cpu.py` likely needs to check the compatibility of the output data type before adding the schedule to the strategy; a sketch of such a guard follows.
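The sketch below is illustrative only, not a patch: the strategy helper, `topi` function names, and the assumption that the tensor intrinsic is declared with an int32 output are taken from the error message and the existing strategy code as best recalled, and may differ from the actual identifiers in `relay/strategy/arm_cpu.py`.

```python
# Hypothetical guard in the arm_cpu strategy: only register the specialised
# depthwise schedule when the requested output dtype matches the dtype the
# tensor intrinsic was declared with; otherwise fall back to the generic schedule.
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_compute_conv2d, wrap_topi_schedule


def depthwise_conv2d_nhwc_strategy_arm_cpu(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()

    # The intrinsic in the error message was declared with int32, so gate the
    # arm_cpu schedule on the output dtype (assumption: int32 is the only
    # dtype the tensorized schedule supports).
    if out_type.dtype == "int32":
        strategy.add_implementation(
            wrap_compute_conv2d(topi.arm_cpu.compute_depthwise_conv2d_nhwc),
            wrap_topi_schedule(topi.arm_cpu.schedule_depthwise_conv2d_nhwc),
            name="depthwise_conv2d_nhwc.arm_cpu",
            plevel=15,
        )

    # Generic fallback keeps other dtypes (e.g. int64) compiling instead of
    # failing inside tensorize.cc.
    strategy.add_implementation(
        wrap_compute_conv2d(topi.nn.depthwise_conv2d_nhwc),
        wrap_topi_schedule(topi.generic.schedule_depthwise_conv2d_nhwc),
        name="depthwise_conv2d_nhwc.generic",
        plevel=10,
    )
    return strategy
```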