tch-rs
question: compiling tch-rs with static torch lib
tch-rs uses torch's shared library. Is it possible to build tch-rs with a static library? I have a use case where I want to bundle torch (WASM and iOS).
This would indeed be a nice thing to support. A first limitation is getting a statically linked version of libtorch; with that, I feel it wouldn't be very difficult to tweak the build script (and it would also probably remove some hacks required by the shared-library setup).
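To make the idea concrete, here is a minimal sketch of how a `build.rs` might emit different linker directives for static vs. dynamic linking. The function name, the library names (`torch`, `c10`), and the paths are illustrative assumptions, not the actual tch-rs build script:

```rust
// Hypothetical sketch: cargo directives a build script might print to
// switch between a static and a dynamic libtorch link. Library names
// and the lib directory are assumptions for illustration only.

fn link_directives(static_link: bool, lib_dir: &str) -> Vec<String> {
    let mut out = vec![format!("cargo:rustc-link-search=native={}", lib_dir)];
    if static_link {
        // A static build would fold libtorch.a (and its dependencies)
        // directly into the final binary.
        out.push("cargo:rustc-link-lib=static=torch".to_string());
        out.push("cargo:rustc-link-lib=static=c10".to_string());
        // The C++ runtime still has to be linked from somewhere.
        out.push("cargo:rustc-link-lib=dylib=stdc++".to_string());
    } else {
        // Dynamic linking: the binary loads libtorch.so at runtime.
        out.push("cargo:rustc-link-lib=dylib=torch".to_string());
        out.push("cargo:rustc-link-lib=dylib=c10".to_string());
    }
    out
}

fn main() {
    // In a real build.rs these lines would be consumed by cargo.
    for d in link_directives(true, "/opt/libtorch/lib") {
        println!("{}", d);
    }
}
```

The interesting part is only the `static=` vs. `dylib=` prefix in `cargo:rustc-link-lib`; everything else (which libraries, in what order, plus platform-specific system libs) is where the real work in the build script would be.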
Interestingly, https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-static-with-deps-2.0.0%2Bcpu.zip does exist (I just replaced `shared` with `static` in the original URL), but this archive doesn't seem to contain the `libtorch.a` file I would have expected; instead it ships the shared-object version.
I've looked into this in more detail and put up #712, which adds some support for static linking. It works fine for the Linux CPU version. I'm not sure it would help on WASM though, as the libtorch library is C++, but maybe there is some emscripten magic that can be applied there.
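As a usage sketch, a static build would presumably be driven by environment variables when compiling a crate that depends on tch-rs. The variable names below (`LIBTORCH`, `LIBTORCH_STATIC`) are assumptions; check the torch-sys build script in #712 for the actual knobs:

```shell
# Hypothetical configuration sketch: point the build at a libtorch that
# actually contains static archives, then opt in to static linking.
# LIBTORCH_STATIC is an assumed flag name, not confirmed from the PR.
export LIBTORCH=/opt/libtorch
export LIBTORCH_STATIC=1
cargo build --release
```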
@antimora did you end up trying to compile it statically and run it on WASM?
@finnkauski no, I didn't. I am using the Burn framework, which now supports WebGPU and a Candle CPU backend that compiles to WASM. Here is an example I created: https://github.com/tracel-ai/burn/tree/main/examples/image-classification-web