Support lowering `fb-opt` from `onnx` to `torch` dialect
The following operations fail to lower; examples are included below. These lowerings are supposed to be complete already, so the existing patterns will need to be corrected (a sketch of the pattern shape follows the list):
- [x] onnx.Cast
  %208 = torch.operator "onnx.Cast"(%207) {torch.onnx.to = 1 : si64} : (!torch.vtensor<[?,?,?,?],si64>) -> !torch.vtensor<[?,?,?,?],f32>
- [x] onnx.Div
  %227 = torch.operator "onnx.Div"(%222, %226) : (!torch.vtensor<[1,64,768],f32>, !torch.vtensor<[1,64,1],f32>) -> !torch.vtensor<[1,64,768],f32>
- [x] onnx.Equal
  %205 = torch.operator "onnx.Equal"(%204, %175) : (!torch.vtensor<[4],si64>, !torch.vtensor<[4],si64>) -> !torch.vtensor<[4],i1>
- [x] onnx.ReduceMean #2702
  %221 = torch.operator "onnx.ReduceMean"(%220) {torch.onnx.axes = [-1 : si64]} : (!torch.vtensor<[1,64,768],f32>) -> !torch.vtensor<[1,64,1],f32>
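
For anyone picking these up: the lowerings are registered in the TorchOnnxToTorch conversion via `patterns.onOp("<name>", <min-opset>, <rewrite>)`. Below is a minimal sketch of the shape a fixed onnx.Cast pattern could take, assuming the in-tree `OpBinder` helpers (`tensorOperand`, `s64IntegerAttr`, `tensorResultType`). `populateCastSketch` is a hypothetical name, and the dtype mapping is hardcoded to the `to = 1` (f32) case from the example above rather than the full ONNX-to-torch dtype table:

```cpp
#include "torch-mlir/Conversion/TorchOnnxToTorch/Patterns.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

using namespace mlir;
using namespace mlir::torch;
using namespace mlir::torch::onnx_c;

// Hypothetical registration helper; in-tree, patterns like this live in the
// populateDefaultDomain* functions.
void populateCastSketch(OnnxCustomOpConversionPattern &patterns) {
  patterns.onOp(
      "Cast", 1, [](OpBinder binder, ConversionPatternRewriter &rewriter) {
        Torch::ValueTensorType resultType;
        Value operand;
        int64_t dtypeIntOnnx;
        if (binder.tensorOperand(operand) ||
            binder.s64IntegerAttr(dtypeIntOnnx, "to") ||
            binder.tensorResultType(resultType))
          return failure();
        // Only the case from the example above: ONNX dtype 1 (f32) maps to
        // torch ScalarType 6 (float). A real pattern needs the full
        // ONNX -> torch dtype table.
        if (dtypeIntOnnx != 1)
          return failure();
        Value dtype = rewriter.create<Torch::ConstantIntOp>(
            binder.getLoc(), rewriter.getI64IntegerAttr(/*torch float=*/6));
        Value none = rewriter.create<Torch::ConstantNoneOp>(binder.getLoc());
        Value cstFalse =
            rewriter.create<Torch::ConstantBoolOp>(binder.getLoc(), false);
        rewriter.replaceOpWithNewOp<Torch::AtenToDtypeOp>(
            binder.op, resultType, operand, dtype,
            /*non_blocking=*/cstFalse, /*copy=*/cstFalse,
            /*memory_format=*/none);
        return success();
      });
}
```

The extra `non_blocking`/`copy`/`memory_format` constants are there because `aten.to.dtype` takes arguments that ONNX Cast has no equivalent for, so the sketch pins them to their defaults.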
The following operations have no implementations:
- [x] onnx.Concat (done in this commit)
- [x] #2711
- [ ] onnx.Expand https://github.com/nod-ai/SHARK-Turbine/issues/302 #2714
- [x] onnx.Gather #242
- [x] onnx.Max merged
- [x] onnx.Mul merged
- [x] onnx.Pow merged
- [x] onnx.Reshape merged
- [x] #2696
- [x] onnx.Split merged
(After creating an issue to claim one of the above ops, please also register that you're working on it in the big ONNX tracking issue.)
I made a gist with test cases for the ops that have examples above. Div already works as of today; I'll be tackling onnx.Equal.
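
In case it saves someone else time, this is roughly the shape I'd expect the Equal pattern to take, mapping onnx.Equal onto torch.aten.eq.Tensor with the same `onOp` machinery sketched above. This is a sketch, not the committed code; the second argument to `onOp` is the minimum supported opset version:

```cpp
// Sketch only: this would sit alongside the other patterns in the
// TorchOnnxToTorch conversion. The "19" matches the opset version
// registered in the commit; see the note below about lowering it.
patterns.onOp(
    "Equal", 19, [](OpBinder binder, ConversionPatternRewriter &rewriter) {
      Torch::ValueTensorType resultType;
      Value lhs, rhs;
      if (binder.tensorOperands(lhs, rhs) ||
          binder.tensorResultType(resultType))
        return failure();
      rewriter.replaceOpWithNewOp<Torch::AtenEqTensorOp>(
          binder.op, resultType, lhs, rhs);
      return success();
    });
```

`aten.eq.Tensor` broadcasts its operands and produces a bool (i1) tensor, which lines up with ONNX Equal's multidirectional-broadcast semantics.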
The Equal lowering from this commit has an unnecessarily high minimum opset version (19), when I'm pretty sure we can support it all the way down to opset 1.
I noticed a couple of other ops like that too. It would be good to do a scrub: https://github.com/llvm/torch-mlir/issues/2700