emitted aten.norm.ScalarOpt_dim
Can you rebase your branch to resolve the conflicts so that the CI can run?
Can you add the div.int op in a separate PR? This makes the PRs more self-contained and easier to review.
Hey @albertdmath, you can also run the lit tests locally with `cmake --build build --target check-torch-mlir-all`.
The error I've been getting is a type mismatch: when I use floats, the error asks for ints, and when I use ints, it asks for floats.
Just to confirm, the cast would happen with AtenToDtypeOp?
I'm also assuming this new approach would mean reverting the changes I made to TorchOps.cpp.
Can you create the lowering for AtenDivInt in a separate PR, since it is not really related to the goal of this PR?
AtenToDtypeOp takes a tensor as an input, so that would not work with your op. You should be able to use AtenFloatScalar to cast the scalars to float
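For context, the scalar cast being suggested mirrors plain Python semantics: convert the int scalars to float before the true division. This is a minimal illustration in Python, not torch-mlir C++; the function name `div_int` is just a placeholder for the behavior being lowered.

```python
# Illustrates the scalar cast under discussion: cast int scalars to
# float before dividing, analogous to what aten.Float.Scalar does at
# the Torch dialect level (plain Python here, not MLIR).
def div_int(lhs: int, rhs: int) -> float:
    # Cast both operands to float first, then perform true division.
    return float(lhs) / float(rhs)

print(div_int(7, 2))  # 3.5
```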
Alright, I've created a separate PR for aten.div.int; please let me know if it looks good. Since this op depends on aten.div.int, I want to get that PR merged first, then remove the duplicated aten.div.int work from this branch and get this one merged.
Any progress with this PR?