Prashant Kumar
If a `vtensor.literal` op is passed to an `aten.Float.Tensor` op, the pair can be canonicalized into a single float constant op.
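The fold can be sketched with a hypothetical mini-IR in Python (the tuple representation and `fold_float_tensor` helper below are illustrative, not the real torch-mlir API): when the operand of `aten.Float.Tensor` is a compile-time literal, the result is itself a constant.

```python
# Hypothetical mini-IR: an op is a (name, operand) tuple.
# Canonicalization: aten.Float.Tensor(vtensor.literal(x)) -> torch.constant.float(x)
def fold_float_tensor(op):
    kind, operand = op
    if kind == "aten.Float.Tensor" and operand[0] == "vtensor.literal":
        # The literal's value is known at compile time, so the conversion
        # to a Python float folds to a single constant op.
        return ("torch.constant.float", float(operand[1]))
    return op  # not foldable: leave the op unchanged

print(fold_float_tensor(("aten.Float.Tensor", ("vtensor.literal", 3.5))))
# -> ('torch.constant.float', 3.5)
```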
I am trying to run https://gist.github.com/pashu123/9cb499308ec1db04e0af99e3a86a9251, which produces the training graph. To reproduce the error, run `python gist-.py`. Make sure you have torch-mlir and functorch pip-installed. The error is: ```...
Decomposes the static 4-d and higher-rank cases of batch_matmul into linalg.batch_matmul, with the proper collapsing and expansion of the batch dimensions.
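The decomposition can be sketched in plain Python over nested lists (the helper names below are illustrative; in the IR the collapse/expand steps correspond to `tensor.collapse_shape` / `tensor.expand_shape` around a rank-3 `linalg.batch_matmul`): the leading batch dims of a static rank-4 input are flattened into one batch dim, the rank-3 batch matmul runs, and the result is expanded back.

```python
def batch_matmul_3d(a, b):
    # Rank-3 batch matmul: a is [B][m][p], b is [B][p][n], as nested lists.
    return [[[sum(a[i][r][k] * b[i][k][c] for k in range(len(b[i])))
              for c in range(len(b[i][0]))]
             for r in range(len(a[i]))]
            for i in range(len(a))]

def decompose_4d_batch_matmul(a, b):
    # Static rank-4 case: a is [b0][b1][m][p], b is [b0][b1][p][n].
    b0, b1 = len(a), len(a[0])
    # Collapse [b0, b1, m, p] -> [b0*b1, m, p]  (tensor.collapse_shape)
    a3 = [a[i][j] for i in range(b0) for j in range(b1)]
    b3 = [b[i][j] for i in range(b0) for j in range(b1)]
    c3 = batch_matmul_3d(a3, b3)
    # Expand [b0*b1, m, n] -> [b0, b1, m, n]  (tensor.expand_shape)
    return [c3[i * b1:(i + 1) * b1] for i in range(b0)]
```

Multiplying by identity matrices leaves the input unchanged, which makes the round-trip easy to check by hand.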
unet_torch is added to the SHARK tank. The dynamic case is still missing; I will be adding and enabling it.
-- Dynamic cases need to be handled in the flow dialect.
This makes it easy for downstream compilers like MLIR to trace the code.
The quantized_batch_matmul op is lowered to batch_matmul. This PR extends the existing support by manipulating the iterators and adding support for the batch dimension.
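The intended semantics can be sketched in Python (a minimal sketch; per-tensor zero points are assumed, and the helper name is illustrative): the quantized batch matmul subtracts the zero points before accumulating, and the batch case simply prepends a parallel batch iterator to the (parallel, parallel, reduction) iterators of the non-batched quantized matmul.

```python
def quantized_batch_matmul(lhs, rhs, lhs_zp, rhs_zp):
    # out[b][i][j] = sum_k (lhs[b][i][k] - lhs_zp) * (rhs[b][k][j] - rhs_zp)
    # The outer loop over b is the extra parallel batch iterator; the inner
    # three loops are the usual (parallel, parallel, reduction) iterators.
    return [[[sum((lhs[b][i][k] - lhs_zp) * (rhs[b][k][j] - rhs_zp)
                  for k in range(len(rhs[b])))
              for j in range(len(rhs[b][0]))]
             for i in range(len(lhs[b]))]
            for b in range(len(lhs))]
```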