Chi_Liu
This linalg patch can test TOSA with a tiny change: [https://github.com/nod-ai/SHARK/pull/341](https://github.com/nod-ai/SHARK/pull/341). But now we get this bug:

```
raise TorchMlirCompilerError(trimmed_message) from None
torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering Torch Backend IR -> TOSA Backend IR failed...
```
Same issue as in [https://github.com/llvm/torch-mlir/issues/961](https://github.com/llvm/torch-mlir/issues/961). Although it mentions that the bug has been fixed, it turns out not to be fixed for GPT-2. [https://github.com/llvm/torch-mlir/pull/1017](https://github.com/llvm/torch-mlir/pull/1017) We might need to add support for some other `op.int`.
Byte bug patch https://github.com/llvm/torch-mlir/pull/1384 and view bug patch https://github.com/llvm/torch-mlir/pull/1353: after applying these two patches, we get a broadcast_to bug.

```
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/gpt2_torch/gpt2.py", line 89, in...
```
A useful TOSA support link: https://discourse.llvm.org/t/basic-tosa-support/4432
General update: status of the patches so far.

- [aten.view](https://github.com/llvm/torch-mlir/pull/1353) MERGED
- [aten.sum.dim_IntList test](https://github.com/llvm/torch-mlir/pull/1387) MERGED
- [aten.clone](https://github.com/llvm/torch-mlir/pull/1388) MERGED
- [torch.uint8 type support](https://github.com/llvm/torch-mlir/pull/1384) MERGED
- [aten.broadcast_to](https://github.com/llvm/torch-mlir/pull/1429) MERGED
- [torch.aten.arange.start_step](https://github.com/llvm/torch-mlir/pull/1442) MERGED
- [torch.aten.where.self](https://github.com/llvm/torch-mlir/pull/1454) MERGED
- [torch.prim.NumToTensor.Scalar + torch.valsem.aten.copy + torch.aten.to.dtype](https://github.com/llvm/torch-mlir/pull/1457) 3 op...
**The torch.constant.int error just means there are aten ops that use this torch.constant.int as an operand and have not been lowered successfully by your lowering code.** You need to find the op that...
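To illustrate (a hypothetical IR snippet, not taken from the actual GPT-2 module): when an aten op fails to legalize, its `torch.constant.int` operands are left behind, and the backend contract check reports the constant rather than the op that is the real culprit:

```mlir
// Hypothetical example: suppose aten.transpose.int failed to legalize to TOSA.
// Its two dimension operands stay behind as torch.constant.int, so the error
// names the constants even though the transpose op is the real problem.
%int0 = torch.constant.int 0
%int1 = torch.constant.int 1
%0 = torch.aten.transpose.int %arg0, %int0, %int1
    : !torch.vtensor<[2,3],f32>, !torch.int, !torch.int -> !torch.vtensor<[3,2],f32>
```

So when you see `torch.constant.int` in the error, search the dumped IR for its users to find which aten op was not converted.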
Here is a link to a useful [torch_mlir_debug_command.txt](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2#file-torch_mlir_debug_command-txt) that will be helpful for the boot camp and for later reference while developing torch-mlir.
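As a rough sketch of the workflow (standard MLIR flags; the pipeline name is torch-mlir's TOSA backend pipeline, and `repro.mlir` is a placeholder for the IR file that `run_pipeline_with_repro_report` dumps on failure):

```shell
# Re-run the failing pipeline standalone and print the IR after every pass,
# so you can see the last pass that ran before the lowering broke.
torch-mlir-opt -pass-pipeline='torch-backend-to-tosa-backend-pipeline' \
    -mlir-print-ir-after-all -mlir-disable-threading repro.mlir
```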
Got an error with torch-mlir 20221228:

```
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/pytorch/gpt2/gpt2tosa.py", line 91, in <module>
    module = torch_mlir.compile(
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/__init__.py", line 371, in compile
    run_pipeline_with_repro_report(
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 73,...
```