Gaurav Shukla
> What happens when you use 3, 4, 3 instead of -1, -1, -1 in the annotate args?

It passed successfully! But dynamic dimensions cause a runtime error: `python: /home/gaurav/MLIRepos/torch-mlir/build/tools/mlir/include/mlir/IR/BuiltinTypeInterfaces.h.inc:163:...
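For reference, a minimal sketch of the two annotations being compared, assuming the torch-mlir e2e test decorators (`annotate_args`, `export`); the import path and module body are illustrative and may differ from the actual test:

```python
import torch
# Import path used by the e2e test suite around this time; it may have moved in newer versions.
from torch_mlir_e2e_test.torchscript.annotations import annotate_args, export


class StaticShapeModule(torch.nn.Module):
    # Hypothetical module: the op under test is omitted, only the annotations matter here.
    @export
    @annotate_args([
        None,                              # slot for `self`
        ([3, 4, 3], torch.float32, True),  # fully static dims: the TOSA lowering passes
        # ([-1, -1, -1], torch.float32, True),  # fully dynamic dims: hits the runtime assertion above
    ])
    def forward(self, x):
        return x
```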
@sjarus Did you get a chance to look at it?
@sjarus I have made the test case partially static for now in order to pass the tosa test. Should I create a new issue for the dynamic case handling in...
@silvasean Thanks for adding support for more control over decomposition based on backend. Could you please review this again?
This looks like a part of the `AtenLinearOp` lowering to linalg. I can modify the same lowering (the linalg conversion pass) to separate out the bias addition, but moving the lowering to decomposition...
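As a rough illustration of what separating the bias addition out of the `aten.linear` lowering means at the op level (a sketch in PyTorch terms, not the actual pass):

```python
import torch

def decomposed_linear(x, weight, bias=None):
    # aten.linear(x, w, b) is equivalent to aten.matmul(x, w^T) followed by a
    # separate bias add, which is the split discussed above.
    out = torch.matmul(x, weight.t())
    return out if bias is None else out + bias

x = torch.randn(2, 5, 4)   # batched input exercises the (3D, 2D) matmul case
w = torch.randn(6, 4)
b = torch.randn(6)
assert torch.allclose(decomposed_linear(x, w, b),
                      torch.nn.functional.linear(x, w, b), atol=1e-6)
```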
There is an open patch for the decomposition: https://github.com/llvm/torch-mlir/pull/862. CI fails for this PR because the `aten.matmul` op does not handle higher-dimensional cases. A specific test case of (3D, 2D)...
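For context, the (3D, 2D) case follows `torch.matmul`'s batching rule, which the lowering reportedly does not yet cover; a small check of the expected semantics:

```python
import torch

# (B, M, K) @ (K, N) -> (B, M, N): the 2-D RHS is reused for every batch.
lhs = torch.randn(2, 3, 4)
rhs = torch.randn(4, 5)
out = torch.matmul(lhs, rhs)
assert out.shape == (2, 3, 5)
# Unrolled reference: one 2-D matmul per batch slice.
ref = torch.stack([lhs[b] @ rhs for b in range(lhs.shape[0])])
assert torch.allclose(out, ref, atol=1e-6)
```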
Another related issue: https://github.com/llvm/torch-mlir/issues/879
@silvasean I am working on this, will take care of #919 also. Thanks!
> Also I think this is only an optimization when the `transpose` is the only user of the original `permute`. Otherwise, we would be replacing one `transpose` with a `listconstruct`...
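A small sketch of the fold itself, independent of the single-user question: a `permute` followed by a `transpose` composes into one `permute` whose target dims are swapped (illustrative Python, not the actual pattern):

```python
import torch

def fold_permute_transpose(perm, d0, d1):
    # permute(perm) then transpose(d0, d1) == permute(perm with entries d0 and d1 swapped)
    folded = list(perm)
    folded[d0], folded[d1] = folded[d1], folded[d0]
    return folded

x = torch.randn(2, 3, 4, 5)
perm, d0, d1 = [0, 2, 3, 1], 1, 3
assert torch.equal(x.permute(perm).transpose(d0, d1),
                   x.permute(fold_permute_transpose(perm, d0, d1)))
```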
I think one possible approach could be through the `tm_tensor` dialect, like this: 1. The `torch.scatter(self, dim, index, src)` op updates `self` like: For a 3-D tensor, self is updated as:...
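A reference sketch of the `torch.scatter` semantics being targeted, written as plain Python loops (illustrative only, not the `tm_tensor` lowering):

```python
import torch

def scatter_ref(self_t, dim, index, src):
    # For a 3-D tensor: the output element's index along `dim` is replaced by
    # index[i][j][k], e.g. dim == 0 gives out[index[i][j][k]][j][k] = src[i][j][k].
    out = self_t.clone()
    for i in range(index.shape[0]):
        for j in range(index.shape[1]):
            for k in range(index.shape[2]):
                pos = [i, j, k]
                pos[dim] = int(index[i, j, k])
                out[tuple(pos)] = src[i, j, k]
    return out

self_t = torch.zeros(3, 4, 5)
# A single slice along dim 0 avoids duplicate indices, whose order of application is undefined.
index = torch.randint(0, 3, (1, 3, 4))
src = torch.randn(1, 3, 4)
assert torch.equal(scatter_ref(self_t, 0, index, src),
                   torch.scatter(self_t, 0, index, src))
```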