powderluv
Yeah, I _think_ delocate can't find the quant library because it is only opened at runtime? Anyway, this is now moot since the builder builds a perfectly installable universal binary on...
Going to leave it open, since I may have just seen the same issue on my Intel macOS machine.
This seems to be because of the weak linking of torch symbols: https://github.com/pytorch/pytorch/issues/48452. In our package we have:

```
site-packages % find . -name '*.dylib' | grep torch
./torchvision/.dylibs/libz.1.2.11.dylib
./torchvision/.dylibs/libpng16.16.dylib
...
```
**Workaround**

```
# Replace mlir_venv with whatever your venv is
cd mlir_venv/lib/python3.10/site-packages/torch_mlir/.dylibs
rm *.dylib
ln -s ../../torch/lib/libc10.dylib
ln -s ../../torch/lib/libshm.dylib
ln -s ../../torch/lib/libtorch.dylib
ln -s ../../torch/lib/libtorch_cpu.dylib
ln -s ../../torch/lib/libtorch_python.dylib
```
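If anyone wants to script this instead of running the shell commands by hand, here is a rough Python equivalent. It's a sketch, not an official tool: the function name `relink_torch_dylibs` is made up, and it assumes the same venv layout shown above (`torch_mlir/.dylibs` and `torch/lib` as siblings under `site-packages`).

```python
from pathlib import Path

# The five torch libraries the shell workaround above re-links.
TORCH_DYLIBS = (
    "libc10.dylib",
    "libshm.dylib",
    "libtorch.dylib",
    "libtorch_cpu.dylib",
    "libtorch_python.dylib",
)

def relink_torch_dylibs(site_packages: str) -> None:
    """Replace the bundled copies in torch_mlir/.dylibs with relative
    symlinks into torch/lib, so both packages load one set of torch
    libraries (hypothetical helper; adjust paths for your venv)."""
    dylibs_dir = Path(site_packages) / "torch_mlir" / ".dylibs"
    for name in TORCH_DYLIBS:
        link = dylibs_dir / name
        # Remove the bundled copy (or a stale symlink) if present,
        # mirroring `rm *.dylib` in the shell version.
        if link.exists() or link.is_symlink():
            link.unlink()
        # Relative link, matching `ln -s ../../torch/lib/<name>`.
        link.symlink_to(Path("..") / ".." / "torch" / "lib" / name)
```

Using relative symlinks (rather than absolute paths) keeps the links valid if the venv is moved or renamed.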
This is still an issue on x86 macOS builds, but we don't care about those builds right now, so it's OK to leave this closed.
@oroppas if you get it to work and build cleanly we should probably add an FYI CI for Windows.
This is awesome!!! The Python bindings can be from the released version of PyTorch, right? This way this integration can span various versions of PyTorch until there is...
Do you have a hard requirement for Ubuntu 18.04? The reason we moved to 22.04 in the CI is to get newer tools and a newer Python version by default.
You could add Bazel as an option in https://github.com/llvm/torch-mlir/pull/1234 and then anyone building other CIs / Releases can also test Bazel builds locally via docker.
If we return true, don't we always have to resolve all CUDA ops?