Sayan Saha

19 comments by Sayan Saha

Thanks for the feedback @vivekkhandelwal1. I've asked in the PyTorch community (https://github.com/pytorch/pytorch/issues/153149) whether supporting a tuple of a single int is intended behavior and whether the documentation needs to be enhanced to...

Hi @vivekkhandelwal1, PyTorch devs clarified in https://github.com/pytorch/pytorch/issues/153149 that the behavior of a single-element tuple is the same as that of a single element for the `kernel` param. I also asked how to fix the...

> > Hi [@vivekkhandelwal1](https://github.com/vivekkhandelwal1), PyTorch devs clarified in [pytorch/pytorch#153149](https://github.com/pytorch/pytorch/issues/153149) that the behavior of a single-element tuple is the same as that of a single element for the `kernel` param. I also asked how to...

@vivekkhandelwal1, I agree that `DecomposeComplexOps` is not the correct pass in which to insert the logic to extend the params. I don't think there's an existing pass to do this either --...

One option is to use PyTorch's decompositions to decompose the op into core ATen ops before importing into MLIR. Decomposing early leads to successful importing with `tosa` IR (a sketch of this flow is below): `import ...`
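
A minimal sketch of that flow, assuming a simple stand-in model (the actual op, shapes, and snippet from the truncated comment are not reproduced here; `max_pool2d` and the input size are placeholders chosen for illustration):

```
import torch
from torch_mlir import fx


class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.max_pool2d(x, kernel_size=2)


example = torch.randn(1, 3, 8, 8)

# Export first, then decompose to core ATen ops *before* handing the
# program to the MLIR importer.
prog = torch.export.export(M(), (example,))
prog = prog.run_decompositions()

# Import the already-decomposed ExportedProgram and lower to TOSA.
module = fx.export_and_import(prog, output_type="tosa")
print(module)
```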

> I'm not sure how much this practically matters, but `min(x, inf)` is only the same as `min(x, realmax)` if `x` is finite, so this change will possibly result in...
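
A quick illustration of the distinction in PyTorch (the values below are chosen purely to show the behavior):

```
import torch

x = torch.tensor([1.0, float("inf")])
realmax = torch.finfo(torch.float32).max

# For finite entries the two clamps agree, but inf is preserved by the
# first and replaced by realmax in the second.
print(torch.minimum(x, torch.tensor(float("inf"))))  # tensor([1., inf])
print(torch.minimum(x, torch.tensor(realmax)))       # tensor([1.0000e+00, 3.4028e+38])
```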

> One alternative is to introduce a flag, such as `SupportNonFinites`, through the pass-pipeline (and probably `fx.export_and_import`), which by default will be `True` and preserve the current behavior of `linalg-on-tensors`...
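
Purely as a sketch of what the proposal could look like from the Python side (the flag below does not exist in torch-mlir today; its name, spelling, and placement are assumptions drawn from this comment, and the toy module is a placeholder):

```
import torch
from torch_mlir import fx


class Clamp(torch.nn.Module):
    def forward(self, x):
        return x.clamp(max=1.0)


# Hypothetical: a `support_non_finites` knob threaded through the import API,
# defaulting to True so today's linalg-on-tensors behavior is preserved.
module = fx.export_and_import(
    Clamp(),
    torch.randn(4),
    output_type="linalg-on-tensors",
    # support_non_finites=True,  # proposed flag only; not an existing parameter
)
print(module)
```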

`torchscript.compile` has been deprecated for a while; have you tried the newer `fx.export_and_import` API (https://github.com/llvm/torch-mlir/blob/60ffb919b465a9cce77e5c4454ae5958e9350fd8/projects/pt1/examples/fximporter_resnet18.py#L32)?
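
For reference, a minimal usage sketch along the lines of that example (the weights and `output_type` below are illustrative choices, not necessarily what the linked script uses):

```
import torch
import torchvision.models as models
from torch_mlir import fx

resnet18 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
module = fx.export_and_import(
    resnet18,
    torch.randn(1, 3, 224, 224),
    output_type="linalg-on-tensors",
)
print(module)
```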

The currently supported path for importing PyTorch models into MLIR via `fx.export_and_import` requires the PyTorch model to be exportable to the `torch.export.ExportedProgram` format via the `torch.export.export` API. Models quantized following the...
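
That exportability precondition can be checked up front with `torch.export.export` alone, before involving torch-mlir at all (the toy module below is just a stand-in for the model in question):

```
import torch


class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)


# If this succeeds, the model meets the precondition for fx.export_and_import.
ep: torch.export.ExportedProgram = torch.export.export(Toy(), (torch.randn(2, 3),))
print(ep)
```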