Chi_Liu
> Yes. After replacing it with `rank`, everything looks good.

```cpp
indexingMaps.push_back(AffineMap::get(
    /*dimCount=*/rank, /*symbolCount=*/0, affineExprs,
    rewriter.getContext()));
```
LLVM branch to use: [https://github.com/llvm/llvm-project/compare/main...AmosLewis:llvm-project:tosa-to-linalg](https://github.com/llvm/llvm-project/compare/main...AmosLewis:llvm-project:tosa-to-linalg). The LLVM patch is waiting for review and merge: [https://reviews.llvm.org/D135343](https://reviews.llvm.org/D135343)
Need this merged first: [build: update llvm tag to 438e5918 #1475](https://github.com/llvm/torch-mlir/pull/1475), or this one: [build: update llvm tag to d325d2b #1483](https://github.com/llvm/torch-mlir/pull/1483)
> Need this merged first: [build: update llvm tag to 438e5918 #1475](https://github.com/llvm/torch-mlir/pull/1475), or this one: [build: update llvm tag to d325d2b #1483](https://github.com/llvm/torch-mlir/pull/1483)

@ramiro050 Could you help uplift LLVM to d325d2b?
Finished the rebase and uplifted LLVM in https://github.com/llvm/torch-mlir/pull/1502. But got a new build-test-out-of-tree bug, which is not from my patch:

```
pip installing Pytorch..
ERROR: Invalid requirement: '/main_checkout/torch-mlir/build_tools/../build_tools/python_deploy/wheelhouse/*'
Hint:...
```
> SGTM!

Could you add me as an LLVM contributor, so I can open a PR for my code?
```
➜  SHARK git:(bloom-test) ✗ pytest tank/test_models.py
======================================= test session starts =======================================
platform linux -- Python 3.10.6, pytest-7.1.2, pluggy-1.0.0 -- /home/chi/src/ubuntu20/shark/SHARK/shark.venv/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/chi/src/ubuntu20/shark/SHARK/.hypothesis/examples')
rootdir: /home/chi/src/ubuntu20/shark/SHARK,...
```
`pytest tank/test_models.py -s --benchmark` hits this bug. Looks like we need to add `tm_tensor` support for benchmark? @monorimet @dan-garvey

```
pytest tank/test_models.py -s --benchmark
_ _ _ _...
```
For now, we just skip the bloom benchmark with:

```python
if config["model_name"] == "bigscience/bloom-560m" and self.module_tester.benchmark:
    pytest.skip(reason="tm_tensor dialect in bloom not supported for benchmark.")
```

But this bug...
Vulkan is also not supported. For now, we mark it as xfail: [https://github.com/nod-ai/SHARK/issues/380](https://github.com/nod-ai/SHARK/issues/380)

```python
if config["model_name"] == "bigscience/bloom-560m" and device == "vulkan":
    pytest.xfail(
        reason="vulkan not supported with tm_tensor in bloom,...
```