OnnxToTorch lowering resize op
However, are we able to use torchvision to generate an end-to-end test? If not, can you link to an external e2e test used to validate the numerics?
We have these test cases running in SHARK-TestSuite / IREE: https://github.com/openxla/iree/blob/ee32fc7b74b4aad49fc46d67ea6c1f617416c4d4/experimental/regression_suite/external_test_suite/config_cpu_llvm_sync.json#L498-L534
Imported .mlir files for those are found in subfolders matching test case names in https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated
The original ONNX files for those tests are here: https://github.com/onnx/onnx/tree/main/onnx/backend/test/data/node
If you want to fully validate numerics on IREE, you can follow the instructions in https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests to see if those XFAIL tests flip to passing. For staying in torch-mlir, you could look at the source .onnx and imported .mlir files and adapt those to how tests are already configured here.
@aldesilv, you should add QuantizedMLP_basic to the xfail set with the torch version check.
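For reference, a minimal sketch of what gating an xfail entry behind a torch version check could look like. The helper names, the set name, and the 2.3.0 threshold are illustrative assumptions for this comment, not taken from torch-mlir's actual xfail_sets.py:

```python
def parse_version(v: str) -> tuple:
    # Minimal version parser: "2.2.0" -> (2, 2, 0).
    # Strips local build suffixes like "+cu121"; ignores pre-release tags.
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])


def xfail_tests(torch_version: str) -> set:
    # Hypothetical helper returning the xfail set for a given torch version.
    # The real project keeps these sets as module-level constants instead.
    xfails = set()
    # Only mark QuantizedMLP_basic as an expected failure on older torch
    # versions (the 2.3.0 cutoff here is a placeholder, not from the PR).
    if parse_version(torch_version) < (2, 3, 0):
        xfails.add("QuantizedMLP_basic")
    return xfails
```

The point of the version guard is that the test stays xfailed only on torch versions where the lowering is known to break, so newer CI configurations still exercise it.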
I have updated the PR. Let's wait for CI to pass.
Great work! I'm looking forward to this getting merged :)
Hi @aldesilv, after addressing the above comments, the PR can be merged.
I'm seeing various downstream failures with this: https://github.com/iree-org/iree/issues/17345.
- A few test cases (like 2/30) are now passing
- A few test cases that were failing are now failing at different places
- A few test cases are compiling but producing incorrect results at runtime
- A few test cases are hanging (either during compilation or at runtime, not sure which)