Scott Todd

Results: 405 comments by Scott Todd

OOC why are the repro steps using `iree-opt`?

```
iree-opt -pass-pipeline='builtin.module(func.func(convert-torch-onnx-to-torch))' model.torch_onnx.mlir -o model.torch.mlir
iree-opt -pass-pipeline='builtin.module(torch-lower-to-backend-contract,func.func(torch-scalarize-shapes),torch-shape-refinement-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline)' model.torch.mlir -o model.modified.mlir
```

That sort of manual pipeline specification is unsupported. For any...
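For comparison, a minimal sketch of the single-tool flow, assuming the `iree-import-onnx` + `iree-compile` entry points and a CPU target (file names are placeholders):

```
# Sketch only -- tool names/flags assumed, adjust to your build and target.
# 1. Import the ONNX model into MLIR.
iree-import-onnx model.onnx -o model.torch_onnx.mlir

# 2. Let iree-compile drive the whole pipeline instead of hand-rolled iree-opt passes.
#    The input-type flag is an assumption; recent builds may auto-detect it.
iree-compile model.torch_onnx.mlir \
  --iree-input-type=onnx \
  --iree-hal-target-backends=llvm-cpu \
  -o model.vmfb
```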

> But, removing the input shapes from `iree-benchmark-module` and only using numpy files as the inputs I was able to run/benchmark without this error.

Numpy files contain shape information (metadata...
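As a rough illustration of that style of invocation (module, function, and file names below are placeholders), passing `.npy` files lets the tool read shapes and dtypes from the file headers instead of explicit `--input=SHAPExTYPE=...` strings:

```
# Sketch: benchmark with inputs supplied as numpy files rather than explicit shapes.
# model.vmfb, main, and input_0.npy are placeholder names.
iree-benchmark-module \
  --device=local-task \
  --module=model.vmfb \
  --function=main \
  --input=@input_0.npy
```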

This would probably be nice to have, though I don't think it is strictly required right now, at least when building 'pytorch' and 'audio' together. I minimally just needed https://github.com/pytorch/audio/pull/3991...

A few random thoughts:

* Other segmentation models to consider: https://huggingface.co/models?library=onnx&sort=likes&search=segment. That filters to those with .onnx files if you want something off-the-shelf; could also try another framework source...

I have a functional WASM HAL backend for IREE at https://github.com/google/iree/pull/5096 using [WAMR](https://github.com/bytecodealliance/wasm-micro-runtime) that can run MNIST, BERT, and our other supported models. It's very slow right now, probably due...

Tips for debugging: https://iree.dev/developers/debugging/compile-time-regressions/
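As a hedged sketch of the sort of first step those tips cover (assuming your `iree-compile` build forwards MLIR's standard timing flags; paths are placeholders):

```
# Sketch: get a per-pass timing breakdown to see where compile time is going.
# Assumes iree-compile exposes MLIR's --mlir-timing options.
iree-compile model.mlir \
  --iree-hal-target-backends=llvm-cpu \
  --mlir-timing \
  --mlir-timing-display=tree \
  -o /dev/null
```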

> However are we able to use `torchvision` to generate an end-to-end test? If not can you link to a external e2e test used to validate the numerics?

We have...
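As a rough sketch of tool-level numeric validation, assuming reference inputs/outputs have already been dumped to `.npy` from the framework side (all names below are placeholders):

```
# Sketch: compile, then compare the module's output against a framework-produced
# reference. input_0.npy / expected_0.npy are assumed to come from e.g. torchvision.
iree-compile model.mlir --iree-hal-target-backends=llvm-cpu -o model.vmfb

iree-run-module \
  --device=local-task \
  --module=model.vmfb \
  --function=main \
  --input=@input_0.npy \
  --expected_output=@expected_0.npy
```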

I'm seeing various downstream failures with this: https://github.com/iree-org/iree/issues/17345.

* A few test cases (like 2/30) are now passing
* A few test cases that were failing are now failing at...

> I have had a similar issue with **large runners with slow checkout and cleanup** that I reported to GitHub Support. They concluded that it is related to this issue, even...

Yep, I'm still seeing substantially slower checkouts on large runners (could break that out into a different issue, and we have a support ticket for it). Latest experiments on https://github.com/openxla/iree/pull/12051,...