Nithin Meganathan

14 comments by Nithin Meganathan

Hi @silvasean, yes, I'll work on this integration. Adding shape inference for this QPyTorch op and checking whether it can lower to TOSA would be the first step, I think...

Quick update: I registered one qtorch op with dtype and shape inference, following the instructions in #895. I got the following Torch IR (truncated): ``` module attributes {torch.debug_module_name = "SimpleModel"} { func.func... ```
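For context, here is a minimal sketch of that kind of registration using `torch.library`, with a Meta kernel providing the dtype/shape inference. The `qtorch_demo` namespace, op schema, and kernels below are illustrative placeholders, not the actual QPyTorch registration used here.

```python
import torch

# Hypothetical namespace and op for illustration only; the real qtorch op,
# schema, and kernels come from the QPyTorch extension.
lib = torch.library.Library("qtorch_demo", "DEF")
lib.define("fixed_point_quantize(Tensor x, int wl, int fl) -> Tensor")

def fixed_point_quantize_cpu(x, wl, fl):
    # Placeholder reference kernel: snap values to a fixed-point grid.
    scale = 2.0 ** fl
    return torch.round(x * scale) / scale

def fixed_point_quantize_meta(x, wl, fl):
    # Meta kernel: encodes shape/dtype inference (output matches the input).
    return torch.empty_like(x)

lib.impl("fixed_point_quantize", fixed_point_quantize_cpu, "CPU")
lib.impl("fixed_point_quantize", fixed_point_quantize_meta, "Meta")

# The op is then callable as torch.ops.qtorch_demo.fixed_point_quantize(x, 8, 4).
```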

Reviving the discussion on this: I was able to use the custom op extension to register a qtorch op and wrote a rewrite from Torch -> TOSA as a `tosa.custom` op with quantization...
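As a rough sketch of the lowering invocation only (assuming the `torch_mlir.compile` Python API of that period; the model and op names are placeholders carried over from the sketch above), lowering a module that uses the registered op to the TOSA dialect looked roughly like this; the Torch -> `tosa.custom` rewrite itself lives in the C++ conversion pass and is not shown:

```python
import torch
import torch_mlir

class SimpleModel(torch.nn.Module):
    def forward(self, x):
        # Placeholder custom op registered as in the sketch above.
        return torch.ops.qtorch_demo.fixed_point_quantize(x, 8, 4)

example_input = torch.randn(4, 4)

# Lower Torch IR to the TOSA dialect; a custom op with no standard lowering
# needs a dedicated rewrite (here, to a tosa.custom op) in the conversion.
tosa_module = torch_mlir.compile(SimpleModel(), example_input,
                                 output_type=torch_mlir.OutputType.TOSA)
print(tosa_module)
```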

This error is probably due to a missing op or unhandled variables in an op. Check all the ops where `%0` is used to triage the error further.

Hi @benvanik, I'm looking into this issue again. TL;DR: we added a Level Zero HAL backend in IREE to target Intel GPUs, which is giving incorrect results after this patch...

Sure Ben, I'll get a Tracy profile of a program that gives wrong results on the backend. @ScottTodd I'll definitely look into expanding the CTS to cover this bug once I...

@benvanik Here's the trace of running the MNIST model on the Level Zero backend, which produces numerically incorrect results: [mnist_levelzero.tracy.zip](https://github.com/iree-org/iree/files/10562384/mnist_levelzero.tracy.zip)

I see, okay. Thanks for taking a look at this.

We can close this issue; the problem was with the Level Zero API.