Sean Silva
Yeah, Python multiprocessing is not great, so I am not surprised it isn't better on 2 cores. +1 for limiting it.
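(A minimal sketch of the "limit it" idea: cap the worker count so small machines don't pay multiprocessing overhead for little parallelism gain. The function name and cap value here are hypothetical, not anything from the torch-mlir test runner.)

```python
import multiprocessing

def pick_num_workers(cap=8):
    # Hypothetical helper: use at most `cap` workers, but never fewer than 1,
    # so a 2-core machine gets 2 workers instead of an oversized pool.
    return min(cap, max(1, multiprocessing.cpu_count()))

n = pick_num_workers()
assert 1 <= n <= 8
```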
We landed this a while ago!! Great work team!
> Looks like something broke the ref backend tests. The output from the failures is pretty slim and that limits any further insights I might have (I wish we biased...
(the usual answer is to run single-threaded with `-s`, but that doesn't help on the CI, or it adds an extra iteration step)
> This was my concern as well, but some of the cases we are starting to see where we need support for aten.view were difficult with the previous approach so...
I think the collapse to 1d + expand approach is a good one that is quite general. We should only use it when there is no other way to handle...
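(To illustrate the "collapse to 1d + expand" idea in the abstract: any view/reshape can be decomposed into flattening the tensor into a single 1-D buffer and then expanding that buffer into the target shape. This is a hedged sketch using NumPy `reshape` as a stand-in for `aten.view`; the helper name is hypothetical and this is not the actual torch-mlir lowering code.)

```python
import numpy as np

def view_via_collapse_expand(x, new_shape):
    # Step 1: collapse all dimensions into one 1-D buffer.
    flat = x.reshape(-1)
    # Step 2: expand the 1-D buffer into the requested shape.
    # Element order is preserved, which is why this decomposition is general.
    return flat.reshape(new_shape)

x = np.arange(24).reshape(2, 3, 4)
y = view_via_collapse_expand(x, (4, 6))
assert y.shape == (4, 6)
assert np.array_equal(y.reshape(-1), x.reshape(-1))
```

The generality comes at a cost: collapsing erases dimension structure that a more direct lowering could exploit, which is why it should be a fallback rather than the default path.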
> > I think the collapse to 1d + expand approach is a good one that is quite general. We should only use it when there is no other way...
Closing this as it appears we found a solution. FYI we have a custom ops RFC as well: https://github.com/llvm/torch-mlir/issues/1462
Hi @LucQueen, you can find documentation for adding a new operator at https://github.com/llvm/torch-mlir/wiki/Torch-ops-E2E-implementation. For general development advice, see https://github.com/llvm/torch-mlir/blob/main/development.md
+1, this should be quite easy to add by repeating https://reviews.llvm.org/D123981 + https://github.com/llvm/torch-mlir/pull/761, but for float16. @FullZing -- can you do that?