Results: 78 comments of Chi_Liu

I encountered the same issue when lowering the huggingface gpt2 model. [https://gist.github.com/AmosLewis/9b929414d5677afda3528122f92bbc73](https://gist.github.com/AmosLewis/9b929414d5677afda3528122f92bbc73) @sjarus `error: failed to legalize operation 'torch.constant.int'`
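A minimal repro sketch of the lowering step, assuming the SHARK flow ultimately calls `torch_mlir.compile` with a TOSA output type; the model loading, input shape, and tracing flag here are my guesses, not the exact SHARK code path:

```python
# Hedged sketch: lower huggingface gpt2 through torch-mlir to TOSA.
# This only reproduces the lowering stage where the legalization error appears.
import torch
import torch_mlir
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2", torchscript=True)
model.eval()
input_ids = torch.randint(0, 50257, (1, 8))  # illustrative batch/sequence shape

# output_type="tosa" is where "failed to legalize operation 'torch.constant.int'"
# would be reported during the torch-to-tosa pass pipeline.
module = torch_mlir.compile(model, input_ids, output_type="tosa", use_tracing=True)
print(module)
```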

```
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/generate_sharktank.py", line 231, in <module>
    save_torch_model(args.torch_model_csv)
  File "/home/chi/src/ubuntu20/shark/SHARK/generate_sharktank.py", line 83, in save_torch_model
    mlir_importer.import_debug(
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/shark_importer.py", line 163, in import_debug
    imported_mlir = self.import_mlir(
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/shark_importer.py", line...
```

```
%579 = torch.operator "aten.split.Tensor"(%578, %int64, %int-1) : (!torch.tensor, !torch.int, !torch.int) -> !torch.list loc(#loc7)
%620 = torch.operator "aten.max.other"(%616, %619) : (!torch.tensor, !torch.tensor) -> !torch.tensor loc(#loc15)
```

> %579 = torch.operator "aten.split.Tensor"(%578, %int64, %int-1) : (!torch.tensor, !torch.int, !torch.int) -> !torch.list loc(#loc7)
> %620 = torch.operator "aten.max.other"(%616, %619) : (!torch.tensor, !torch.tensor) -> !torch.tensor loc(#loc15)

[make_fx example](https://gist.github.com/JakopinA/668476fc358d81964288711115b4f285) add make_fx for...
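A minimal make_fx sketch, assuming the goal is to trace the two failing ops into an FX graph before importing into torch-mlir; the function, split size, and tensor shapes below are illustrative, not the gist's exact code:

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

# Hypothetical function covering the two unsupported ops:
# torch.split -> aten.split.Tensor, torch.max(a, b) -> aten.max.other.
def f(x, y):
    a, b = torch.split(x, 64, dim=-1)
    return torch.max(a + y, b)

fx_graph = make_fx(f)(torch.randn(2, 128), torch.randn(2, 64))
print(fx_graph.graph)  # inspect the traced aten ops before torch-mlir import
```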

```
Current thread 0x00007f8b57683740 (most recent call first):
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 47 in run_pipeline_with_repro_report
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/__init__.py", line 247 in compile
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_utils.py", line 69 in get_torch_mlir_module
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/shark_importer.py", line 74...
```

https://gist.github.com/AmosLewis/fa8662eef03c3379bda6a3974036abd1 tests this op. https://github.com/nod-ai/SHARK/issues/338 uses this op.

In the test, the torch-to-tosa conversion succeeds. But why is the result then converted from tosa to linalg? It is the tosa-to-linalg conversion that raises the bug. @sjarus @eric-k256
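A small sketch of how the two stages could be separated, assuming `torch_mlir.compile` with `output_type="tosa"` stops before any tosa-to-linalg lowering; the module and shapes are illustrative, not the gist's exact test:

```python
import torch
import torch_mlir

class MaxOther(torch.nn.Module):
    # elementwise max of two tensors -> aten.max.other on the torch side
    def forward(self, a, b):
        return torch.max(a, b)

# Stop at TOSA so the torch-to-tosa result can be inspected on its own;
# a later failure would then come from the separate tosa-to-linalg step.
module = torch_mlir.compile(
    MaxOther(), (torch.randn(3, 4), torch.randn(3, 4)), output_type="tosa"
)
print(module)
```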