Native compatibility with CUDA 12.9
Are there any plans to officially support CUDA 12.9? For me personally, installing the pre-built binaries using the vignette instructions never works on my machine with CUDA 12.9: it keeps warning that the package does not support this CUDA version, even though the vignette says the pre-built binary should handle it. Is there any workaround possible on the user side, like setting environment variables or compiling with different options? I've had no problems with the nightly PyTorch builds that support 12.9.
Unfortunately, the LibTorch API and ABI are not backward compatible, so we always need to tweak our code before supporting a new version of LibTorch. This means we can't really support nightly builds of LibTorch without tweaking our code every day.
I wonder what problems you're hitting when using the pre-built binaries; I'm happy to take a look if you can send the error/warning messages.
Also, it should be possible to have multiple CUDA versions coexisting on the same machine, as long as the installed driver supports the most recent one.
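For instance, one quick sanity check from R (assuming `nvidia-smi` is on the PATH) is to ask the driver what it supports; note that the "CUDA Version" nvidia-smi reports is the maximum runtime version the driver can handle, not a toolkit that has to match torch's binaries exactly:

```r
# Query the installed NVIDIA driver from R (assumes nvidia-smi is on the PATH).
system("nvidia-smi --query-gpu=driver_version --format=csv,noheader")
```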
I followed the instructions on the main branch installation vignette as of commit 3f1bb59:
options(timeout = 600)
kind <- "cu128"
version <- available.packages()["torch","Version"]
options(repos = c(
torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
))
install.packages("torch")
But during installation I saw this warning:
Warning: unable to access index for repository https://torch-cdn.mlverse.org/packages/cu128/0.15.1/src/contrib:
cannot open URL 'https://torch-cdn.mlverse.org/packages/cu128/0.15.1/src/contrib/PACKAGES'
trying URL 'https://cloud.r-project.org/src/contrib/torch_0.15.1.tar.gz'
So I looked at the currently live docsite and reran the code above with kind <- "cu124", but I saw the same warning. As expected, the installation then fails when I run torch::install_torch(), because my CUDA version is 12.9.
I have no idea if it's some subtle issue with my system, since I doubt the torch CDN itself is broken.
This is the output of my sessionInfo if it helps (note that I am on WSL2):
R version 4.5.1 (2025-06-13)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.2 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
locale:
[1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8
[4] LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8
[7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C
time zone: America/Los_Angeles
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_4.5.1
So you get a warning with this?
options(timeout = 600)
kind <- "cu124"
version <- available.packages()["torch","Version"]
options(repos = c(
torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
))
install.packages("torch")
Saying something like this?
Warning: unable to access index for repository https://torch-cdn.mlverse.org/packages/cu128/0.15.1/src/contrib:
cannot open URL 'https://torch-cdn.mlverse.org/packages/cu128/0.15.1/src/contrib/PACKAGES'
trying URL 'https://cloud.r-project.org/src/contrib/torch_0.15.1.tar.gz'
We indeed don't have support for CUDA 12.8 in the CRAN version, so you'd need `version = "0.15.1.9000"` if `kind = "cu128"`. But `kind = "cu124"` should work. Can you paste the exact message you get?
FWIW, I just tried on Colab and it just works: https://colab.research.google.com/drive/1ZHbjjPcjQ2CXNKFRhMaEDjI4Z12m935B?usp=sharing
I tried `kind = "cu128"` and `version = "0.15.1.9000"` and the installation worked. I was able to create torch tensors, but only on the CPU; when I tried device = "cuda" it did not work. I'll paste the error message here a bit later.
@dfalbel Here's the error:
> options(timeout = 600)
> kind <- "cu128"
> version <- "0.15.1.9000"
> options(repos = c(
torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
))
install.packages("torch")
Installing package into ‘/home/qile/R/x86_64-pc-linux-gnu-library/4.5’
(as ‘lib’ is unspecified)
trying URL 'https://torch-cdn.mlverse.org/packages/cu128/0.15.1.9000/src/contrib/torch_0.15.1.9000_R_x86_64-pc-linux-gnu.tar.gz'
Content type 'application/x-tar' length 3854852660 bytes (3676.3 MB)
==================================================
downloaded 3676.3 MB
* installing *binary* package ‘torch’ ...
* DONE (torch)
The downloaded source packages are in
‘/tmp/RtmpQSVO7a/downloaded_packages’
>
> torch::torch_tensor(1)
torch_tensor
1
[ CPUFloatType{1} ]
> torch::torch_tensor(1)$cuda()
Error in (function (self, device, dtype, non_blocking, copy, memory_format) :
CUDA error: no CUDA-capable device is detected
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Device-side assertions were explicitly omitted for this error check; the error probably arose while initializing the DSA handlers.
Exception raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xb0 (0x7eec1da50de0 in /home/qile/R/x86_64-pc-linux-gnu-library/4.5/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7eec1d9de46e in /home/qile/R/x86_64-pc-linux-gnu-library/4.5/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_imp
As you can see, when I try to move a tensor to the GPU it says no CUDA-capable device is detected.
Interesting, could you paste the output of nvidia-smi on that machine?
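It may also help to check from R whether LibTorch can see the device at all, using torch's built-in helpers:

```r
library(torch)
cuda_is_available()   # TRUE if LibTorch detects a usable CUDA device
cuda_device_count()   # number of CUDA devices LibTorch can see
```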
Here is the output, run in the same terminal session right after exiting the R interpreter that produced the error above.
$ nvidia-smi
Mon Aug 11 13:30:35 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.04 Driver Version: 577.00 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 5060 ... On | 00000000:01:00.0 Off | N/A |
| N/A 39C P2 10W / 70W | 0MiB / 8151MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+