No adapter found for graphics API AutoGraphicsApi
Hi, I am trying to run burn following the burn book example (https://burn.dev/book/getting-started.html), but it fails with "No adapter found for graphics API AutoGraphicsApi".
This is the Cargo.toml:
[package]
name = "my_burn_app"
version = "0.1.0"
edition = "2021"
[dependencies]
burn = { version = "0.13.0", features = ["wgpu"] }
And this is the main function:
use burn::tensor::Tensor;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type Backend = Wgpu;

fn main() {
    let device = Default::default();
    // Create two tensors: the first with explicit values, the second
    // filled with ones of the same shape as the first.
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<Backend, 2>::ones_like(&tensor_1);
    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
When I run it with cargo run, the following error occurs:
[wangjw@localhost my_burn_app]$ cargo run
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.23s
Running `target/debug/my_burn_app`
thread 'main' panicked at /home/wangjw/.cargo/registry/src/index.crates.io-6f17d22bba15001f/burn-wgpu-0.13.0/src/runtime.rs:278:17:
No adapter found for graphics API AutoGraphicsApi
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
For wgpu you might need to install the correct graphics drivers depending on what you want to target. By default, AutoGraphicsApi targets Metal on macOS and Vulkan on other operating systems. For Vulkan you can do apt install vulkan (and possibly vulkan-tools if you want to validate the installation).
If you want to use another graphics API you can be explicit when using the wgpu backend. For example, the guide defines the backend as
type MyBackend = Wgpu<AutoGraphicsApi, f32, i32>;
But if you explicitly want to target OpenGl, you can use this instead
type MyBackend = Wgpu<OpenGl, f32, i32>;
with use burn::backend::wgpu::OpenGl;.
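For example, here is a minimal sketch that targets Vulkan explicitly instead (my own assembly of the pieces above, assuming the 0.13 wgpu backend where the graphics API is the first type parameter):

use burn::backend::wgpu::{Vulkan, Wgpu};
use burn::tensor::Tensor;

// Explicitly target Vulkan instead of letting AutoGraphicsApi decide.
type MyBackend = Wgpu<Vulkan, f32, i32>;

fn main() {
    let device = Default::default();
    // Any tensor op now goes through the Vulkan adapter, so this fails
    // fast if no Vulkan adapter is available.
    let tensor = Tensor::<MyBackend, 1>::ones([4], &device);
    println!("{}", tensor);
}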
@antimora @laggui I tried but failed
use burn::backend::wgpu::OpenGl;
type MyBackend = Wgpu<OpenGl, f32, i32>;
This displayed the same "No adapter found" error, this time for OpenGl.
Here is the OpenGL info on my CentOS system:
[wangjw@localhost machine]$ glxinfo | grep "OpenGL"
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 7.0, 256 bits)
OpenGL version string: 2.1 Mesa 18.3.4
OpenGL shading language version string: 1.20
OpenGL extensions:
I can run PyTorch using CUDA on my system.
The API has changed since the initial reply 😅
To configure the wgpu backend to use a different runtime, you need to initialize it (taken from the docs):
burn::backend::wgpu::init_sync::<burn::backend::wgpu::Vulkan>(&device, Default::default());
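For reference, here is a minimal sketch of how that call fits into a full program (my assembly of the pieces above, not verbatim from the docs; the runtime must be initialized before any tensor is created on the device):

use burn::backend::wgpu::{init_sync, Vulkan, Wgpu, WgpuDevice};
use burn::tensor::Tensor;

type MyBackend = Wgpu;

fn main() {
    let device = WgpuDevice::default();
    // Select the graphics API explicitly before any tensor touches the device.
    init_sync::<Vulkan>(&device, Default::default());
    let tensor = Tensor::<MyBackend, 1>::ones([4], &device);
    println!("{}", tensor);
}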
I tried it yesterday, but it still didn't work. Could you tell me how to use OpenGl?
Here is the code:
use burn::tensor::Tensor;
use burn::backend::wgpu::OpenGl;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type MyBackend = Wgpu;

fn main() {
    let device = Default::default();
    println!("device {:?}", device);
    // Initialize the wgpu runtime for this device with the OpenGl graphics API.
    burn::backend::wgpu::init_sync::<OpenGl>(&device, Default::default());
    // Create two tensors: the first with explicit values, the second
    // filled with ones of the same shape as the first.
    let tensor_1 = Tensor::<MyBackend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<MyBackend, 2>::ones_like(&tensor_1);
    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
The same error occurred when I used OpenGl:
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.25s
Running `target/debug/machine`
device BestAvailable
thread 'main' panicked at /home/wangjw/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cubecl-wgpu-0.2.0/src/runtime.rs:314:17:
No adapter found for graphics API OpenGl
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Btw, my initial response with an example targeting a specific wgpu graphics API (e.g., OpenGL) was not meant to say "you should use OpenGL" 😅 It was only meant to illustrate that you can specify a target explicitly to make sure it is used. OpenGL was just one example; you could explicitly specify Vulkan instead.
For Vulkan (the default), did you try this?
For vulkan you can do apt install vulkan (and possibly vulkan-tools if you want to validate the installation).
Otherwise, regarding your OpenGL installation...
Here is the OpenGL info on my CentOS system:
[wangjw@localhost machine]$ glxinfo | grep "OpenGL"
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 7.0, 256 bits)
OpenGL version string: 2.1 Mesa 18.3.4
OpenGL shading language version string: 1.20
OpenGL extensions:
This looks like an incorrect setup. Your installation doesn't seem to detect your GPU: llvmpipe is a software renderer, so it should list your NVIDIA device instead (and the vendor should be NVIDIA, not VMware).
Oh, I understand. I haven't installed Vulkan, OpenGL, Metal, etc. because I don't have root privileges. My GPU is an NVIDIA GeForce and I use CUDA. Do you know how to use CUDA with burn?
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:65:00.0 Off |                  N/A |
| 30%   35C    P0   106W / 350W |      0MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Ahhhh I see! In this case, you can use a different backend 🙂
We have our own CUDA backend (enabled via the cuda-jit feature flag), but candle also supports CUDA (enabled with the candle-cuda feature flag), as does libtorch (enabled with tch; you also need to set TORCH_CUDA_VERSION=cu121 before building to fetch the GPU-compatible version).
Our CUDA backend is continually improving, but depending on the task (i.e., the operations involved), libtorch might be the fastest overall at the time of writing.
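As a rough sketch, the Cargo.toml side could look like this (the feature names are the ones listed above; the version number is my assumption, use whatever release you are on):

[dependencies]
# Pick exactly one backend feature:
burn = { version = "0.14.0", features = ["cuda-jit"] }
# burn = { version = "0.14.0", features = ["tch"] }
# burn = { version = "0.14.0", features = ["candle-cuda"] }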
Usage
cuda-jit
use burn::backend::{cuda_jit::CudaDevice, CudaJit};
type MyBackend = CudaJit;
let device = CudaDevice::default();
tch
use burn::backend::libtorch::{LibTorch, LibTorchDevice};
type MyBackend = LibTorch;
let device = LibTorchDevice::Cuda(0);
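As noted above, tch fetches the GPU-enabled libtorch only if the CUDA version is set at build time, e.g. (cu121 as in the note above; match it to your installed CUDA version):

TORCH_CUDA_VERSION=cu121 cargo build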
candle-cuda
use burn::backend::{Candle, candle::CandleDevice};
type MyBackend = Candle;
let device = CandleDevice::cuda(0);
Note: for candle-cuda I used the API that is only available on main (CandleDevice::cuda(0) instead of CandleDevice::Cuda(0)). We recently fixed an issue with how candle handles device identifiers, so if you want to use candle I suggest switching to the latest (main) version.
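To tie this back to the original example, here is a minimal cuda-jit sketch (my assembly of the snippet above with the tensor code from the guide; assumes the cuda-jit feature is enabled and a CUDA device is visible):

use burn::backend::{cuda_jit::CudaDevice, CudaJit};
use burn::tensor::Tensor;

type MyBackend = CudaJit;

fn main() {
    // Default CUDA device (GPU index 0).
    let device = CudaDevice::default();
    let tensor_1 = Tensor::<MyBackend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<MyBackend, 2>::ones_like(&tensor_1);
    // The element-wise addition runs on the GPU via the CUDA backend.
    println!("{}", tensor_1 + tensor_2);
}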
Thanks. I tried cuda-jit; it seems there is something wrong with our server configuration. The following error occurred:
Unable to find nvrtc lib under the names ["nvrtc", "nvrtc64", "nvrtc64_12", "nvrtc64_123", "nvrtc64_123_0", "nvrtc64_120_3", "nvrtc64_10"]
Here is my nvidia info:
(cellpy) [wangjw@localhost work]$ tree /home/wangjw/programs/miniconda3/envs/cellpy/lib/python3.11/site-packages/nvidia/cuda_nvrtc
/home/wangjw/programs/miniconda3/envs/cellpy/lib/python3.11/site-packages/nvidia/cuda_nvrtc
├── include
│ ├── __init__.py
│ ├── nvrtc.h
│ └── __pycache__
│ └── __init__.cpython-311.pyc
├── __init__.py
├── lib
│ ├── __init__.py
│ ├── libnvrtc-builtins.so.11.7
│ ├── libnvrtc-builtins.so.12.1
│ ├── libnvrtc.so.11.2
│ ├── libnvrtc.so.12
│ └── __pycache__
│ └── __init__.cpython-311.pyc
└── __pycache__
└── __init__.cpython-311.pyc
Ahhh, that doesn't seem to be a standard installation. It is probably missing from PATH or LD_LIBRARY_PATH, so cudarc cannot find it. I've seen issues like this in the past, but those were specific to Windows, which doesn't seem to be the case here. It just looks like a custom install.
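If you want to experiment with the conda-bundled copy anyway, you could try pointing the loader at it, e.g. (untested; path taken from the tree output above):

export LD_LIBRARY_PATH=/home/wangjw/programs/miniconda3/envs/cellpy/lib/python3.11/site-packages/nvidia/cuda_nvrtc/lib:$LD_LIBRARY_PATH

Note that only versioned files (libnvrtc.so.11.2, libnvrtc.so.12) are listed there, so you may also need a libnvrtc.so symlink for the lookup to succeed.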
Yes, it's a customized installation. Maybe we need a standard installation when we buy a new server.
I have the same issue; I spent hours on it. I also tried to write a CUDA version but struggled. Unfortunately, the docs don't have any examples of this.
@younes-io did you figure out the issue? What device are you trying to use with wgpu?
Perhaps a new issue could be opened to detail your problem, but I agree that trying to document common issues would be helpful. We haven't had the chance yet because most people actually have a different setup (and/or don't always remember the means to resolution).
I didn't. I have just abandoned burn for now because I couldn't implement a simple CUDA program (I tried to adapt the example in the burn docs, but it had already taken me a lot of time, and I felt like the docs weren't up to date).
By a simple CUDA program, do you mean adapting the simple example below to use your NVIDIA GPU with the wgpu backend?
use burn::tensor::Tensor;
use burn::backend::Wgpu;

// Type alias for the backend to use.
type Backend = Wgpu;

fn main() {
    let device = Default::default();
    // Create two tensors: the first with explicit values, the second
    // filled with ones of the same shape as the first.
    let tensor_1 = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]], &device);
    let tensor_2 = Tensor::<Backend, 2>::ones_like(&tensor_1);
    // Print the element-wise addition (done with the WGPU backend) of the two tensors.
    println!("{}", tensor_1 + tensor_2);
}
Anyway, hopefully we can improve the documentation around special setups to reduce friction. It would be great if you could flag what didn't work in an issue, but you have no obligation 🙂