R torch for Apple Silicon M1 chips
Hi all, I'm using the apple M1 chip but I found R torch does not work, while the pytorch installed via conda works. I was wondering if it possible to link to the libtorch installed by python, or have native installation in R? Thanks
Hi @JingyuHe - you can use the Python install via the reticulate package. The code below might be a good start, and the following cheat sheet could help you integrate the Python install into your R code until R torch is ported to M1 chips.
> library(reticulate)
> virtualenv_create("venv_to_test_torch")
> use_virtualenv("venv_to_test_torch")
> py_install("torch")
> torch <- import("torch")
When using the code above, you can access the modules and classes from Python torch via the $ operator, as in:
> f <- torch$nn$functional
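To illustrate, here is a minimal sketch of using the imported module through reticulate (this assumes torch was installed into the virtualenv as above; the tensor values are just an example):

```r
library(reticulate)
use_virtualenv("venv_to_test_torch")

torch <- import("torch")
f <- torch$nn$functional

# Build a tensor from an R vector and apply a ReLU,
# all through the reticulate bridge
x <- torch$tensor(c(-1, 0, 2))
y <- f$relu(x)
print(y)  # negative entries clamped to zero
```

Note that reticulate converts R numeric vectors automatically, so most PyTorch calls translate directly with `$` in place of `.`.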
Are you using the M1 or Intel installation of R?
I don't have an M1 Mac for testing, but AFAICT running torch via Rosetta 2 should work fine. Installing with M1 support would require building both LibTorch and Lantern from source.
@dfalbel Are you aware of a guide that explains how to do so and link it correctly to the R Torch library?
Are you using the M1 or Intel installation of R?
I don't have an M1 Mac for testing, but AFAICT running torch via Rosetta 2 should work fine. Installing with M1 support would require building both LibTorch and Lantern from source.
I'm using the M1 installation of R. I think the main problem is that install_torch() will install the x86 builds of LibTorch and Lantern by default. I wonder if it is possible to build them from source for R torch?
It would be great if we were able to install torch using the M1 ARM architecture. I don't know if that is possible at the moment. I had the same issues as @JingyuHe while installing on an M1; in the end I was able to run torch through Rosetta 2.
To answer @gl-007: to install the x86_64 version of R, we first need to install Rosetta 2. We can do that by running the following command in the terminal:
softwareupdate --install-rosetta
Then we can install the Intel version of R from https://cran.r-project.org/bin/macosx/.
After that, when you open RStudio, the console should show Platform: x86_64-apple-...; if that is the case, you can proceed by installing torch normally via install.packages().
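As a quick sanity check, you can also ask R itself which architecture it is running under (a sketch; the exact strings are what I'd expect on macOS):

```r
# Under the Intel build of R running through Rosetta 2 this should
# report "x86_64"; under a native Apple silicon build, "aarch64".
R.version$arch

# An alternative view of the same information:
Sys.info()[["machine"]]
```

If this reports "aarch64", you are on the native ARM build and the x86_64 torch binaries will not load.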
We’re excited to announce support for GPU-accelerated PyTorch training on Mac! Now you can take advantage of Apple silicon GPUs to perform ML workflows like prototyping and fine-tuning. Learn more: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
That's exciting! So maybe native M1 support is/will be possible? (Also relevant to #455)
Yeah, I think this has two consequences:
- supporting M1 natively will bring a 5x performance increase - so we have greater incentive to do it.
- it seems that it will also work with Intel and AMD GPUs on Macs (since it's done via the Metal API), so not only owners of new Macs will see the benefits. (I'm not sure this is true, as it's not mentioned in their blog post, but Metal does support those GPUs as well as M1 GPUs.)
I plan to work on this soon, but probably only after the official PyTorch release (currently it seems that it's a 1.12 alpha).
Hi @dfalbel, just an update on the question of Metal PyTorch working on AMD GPUs. Apparently that is not the case - I did an (admittedly quick) test and it was not possible to make it run. It seems to be targeted at Apple silicon only.
Hi @dkgaraujo ! I am not sure if they are going to enable it for the released version but it should be possible if compiling PyTorch from source as per this answer in the forum.
I wonder if there is any update on the Mac M1 chips? I gave it a try recently and the error message is:
library(torch)
trying URL 'https://download.pytorch.org/libtorch/cpu/libtorch-macos-1.11.0.zip'
Content type 'application/zip' length 151690664 bytes (144.7 MB)
==================================================
downloaded 144.7 MB
trying URL 'https://storage.googleapis.com/torch-lantern-builds/refs/heads/cran/v0.8.0/latest/macOS-cpu.zip'
Content type 'application/zip' length 2916327 bytes (2.8 MB)
==================================================
downloaded 2.8 MB
Warning message:
Failed to install Torch, manually run install_torch().
/opt/homebrew/lib/R/4.2/site-library/torch/lib/liblantern.dylib - dlopen(/opt/homebrew/lib/R/4.2/site-library/torch/lib/liblantern.dylib, 0x000A): tried: '/opt/homebrew/lib/R/4.2/site-library/torch/lib/liblantern.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
It looks like the problem happens when installing Lantern. The package only ships a compiled version. I wonder if it is possible (or how) to build it from source. Thank you!
Hi @dkgaraujo ! I am not sure if they are going to enable it for the released version but it should be possible if compiling PyTorch from source as per this answer in the forum.
Thank you for the reference, @dfalbel - that's very helpful!
Basic support has been implemented in #890.
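Once the development version is installed, a quick smoke test might look like the following (the remotes::install_github() call is my assumption about how to get the main branch; adjust to however you normally install dev packages):

```r
# install.packages("remotes")
# remotes::install_github("mlverse/torch")

library(torch)

# If the native arm64 binaries loaded correctly, basic tensor
# operations should work without any architecture errors
x <- torch_randn(2, 2)
print(x + x)
```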
After installing the main branch from GitHub, I am seeing errors like these:
library(torch)
#> Warning: ℹ torch failed to start, restart your R session to try again.
#> ℹ You might need to reinstall torch using `install_torch()`
#> ✖ /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/liblantern.dylib
#> -
#> dlopen(/Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/liblantern.dylib,
#> 0x000A): Library not loaded: '/opt/homebrew/opt/libomp/lib/libomp.dylib'
#> Referenced from:
#> '/Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/libtorch_cpu.dylib'
#> Reason: tried: '/opt/homebrew/opt/libomp/lib/libomp.dylib' (no such file),
#> '/Library/Frameworks/R.framework/Resources/lib/libomp.dylib' (no such file),
#> '/Library/Java/JavaVirtualMachines/jdk-17.0.1+12/Contents/Home/lib/server/libomp.dylib'
#> (no such file)
install_torch()
#> Error in cpp_lantern_init(file.path(install_path(), "lib")): /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/liblantern.dylib - dlopen(/Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/liblantern.dylib, 0x000A): Library not loaded: '/opt/homebrew/opt/libomp/lib/libomp.dylib'
#> Referenced from: '/Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/libtorch_cpu.dylib'
#> Reason: tried: '/opt/homebrew/opt/libomp/lib/libomp.dylib' (no such file), '/Library/Frameworks/R.framework/Resources/lib/libomp.dylib' (no such file), '/Library/Java/JavaVirtualMachines/jdk-17.0.1+12/Contents/Home/lib/server/libomp.dylib' (no such file)
Created on 2022-10-04 with reprex v2.0.2.
Do you have any advice for what I should try?
A quick fix is to install libomp from homebrew:
brew install libomp
But that shouldn't be necessary as we include a copy of libomp.dylib in the package. Can you check if /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/ contains a libomp.dylib file?
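One way to check from R (a sketch; the lib path comes from the error message above, and otool is macOS-only):

```r
# Locate the torch package's lib directory in the default library path
lib_dir <- file.path(.libPaths()[1], "torch", "lib")

# Does the bundled libomp.dylib exist?
file.exists(file.path(lib_dir, "libomp.dylib"))

# Inspect which libraries liblantern actually links against
system(paste("otool -L", file.path(lib_dir, "liblantern.dylib")))
```

If otool shows an absolute /opt/homebrew/opt/libomp path rather than an @rpath reference, the bundled copy is being bypassed at load time.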
Yep, sure does:
$ ls /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library/torch/lib/
cmake libprotoc.a
libXNNPACK.a libpthreadpool.a
libc10.dylib libpytorch_qnnpack.a
libclog.a libshm.dylib
libcpuinfo.a libsleef.a
libkineto.a libtorch.dylib
liblantern.dylib libtorch_cpu.dylib
libnnpack.a libtorch_global_deps.dylib
libomp.dylib libtorch_python.dylib
libprotobuf-lite.a pkgconfig
libprotobuf.a python3.9
Let me know if you want to move this info to the new issue.
Thank you @juliasilge ! The current dev version should work now, as we no longer link to OpenMP.
Yes, this now works for me! 🙏