
Applications using `wgpu` hang forever on bleeding edge Linux with Nvidia drivers 545.29.06 on GNOME / Wayland

Open udoprog opened this issue 7 months ago • 12 comments

Repro steps

Running anything which tries to use wgpu Vulkan, like:

cd examples && cargo run cube

The window starts and renders at least one frame, but becomes completely non-interactive (windows can't be interacted with or moved) and you receive a "hanging" prompt from GNOME:

[screenshot of the GNOME "hanging" prompt]

Note that I think this might legitimately be a platform issue. However:

  • I am unable to reproduce it with either vkcube (X11) or vkcube-wayland, which reports the GPU and runs fine (see below).
  • winit examples also run without issues.
> sudo dnf install vulkan-tools
> vkcube-wayland
Selected GPU 0: NVIDIA GeForce RTX 2080 Ti, type: DiscreteGpu

Screencast from 2023-11-25 18-23-21.webm

So wgpu is currently the lowest level of abstraction I've chased down.

Platform

Log output from running the example:

wgpu_core::instance] Adapter Vulkan AdapterInfo { name: "NVIDIA GeForce RTX 2080 Ti", vendor: 4318, device: 7687, device_type: DiscreteGpu, driver: "NVIDIA", driver_info: "545.29.06", backend: Vulkan }

uname -r:

6.7.0-0.rc2.20231122gitc2d5304e6c64.23.fc40.x86_64

udoprog avatar Nov 25 '23 17:11 udoprog

This is probably a duplicate of #4689, but I'm just gonna add what I've found so far here:

This is where we hang forever:

https://github.com/gfx-rs/wgpu/blob/ebcfd25b58a2c4f3b442031d22b510576c1b8155/wgpu-hal/src/vulkan/instance.rs#L960

Out of curiosity, I added some instrumentation:

let fences = &[sc.fence];

unsafe {
    // Check whether the fence is already signaled before blocking on it
    // (racy, but useful as instrumentation).
    let status = sc.device.raw.get_fence_status(sc.fence)
        .map_err(crate::DeviceError::from)?;
    println!("wait: {}", status);
    // This is the wait that never returns.
    sc.device.raw.wait_for_fences(fences, true, !0)
        .map_err(crate::DeviceError::from)?;
    sc.device.raw.reset_fences(fences).map_err(crate::DeviceError::from)?
}

It seems to hang (although checking the status is racy) when the fence is not already signaled:

wait: true
... repeats a few hundred times
wait: true
wait: false

Note that the vulkan-tools cube demo uses a semaphore for synchronization, so it seems like it's specifically fences that are buggy. And it's very likely a platform issue.

https://github.com/KhronosGroup/Vulkan-Tools/blob/62c4f8f7c546662aa5d43ca185e7d478d1224fb1/cube/cube.c#L1080
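
To make the difference concrete, here is a rough sketch of the two acquire patterns using ash (0.37-style API). This is not the actual wgpu or cube code; `swapchain_loader`, `swapchain`, `acquire_fence`, and `acquire_semaphore` are hypothetical handles standing in for the respective surface state:

use ash::{extensions::khr::Swapchain, vk};

// Roughly the pattern wgpu-hal uses today: the acquire signals a fence and
// the host immediately blocks on it.
unsafe fn acquire_with_fence(
    device: &ash::Device,
    swapchain_loader: &Swapchain,
    swapchain: vk::SwapchainKHR,
    acquire_fence: vk::Fence,
) -> ash::prelude::VkResult<u32> {
    let (index, _suboptimal) = swapchain_loader.acquire_next_image(
        swapchain,
        u64::MAX,
        vk::Semaphore::null(),
        acquire_fence,
    )?;
    // This is the wait that never returns on the affected driver.
    device.wait_for_fences(&[acquire_fence], true, u64::MAX)?;
    device.reset_fences(&[acquire_fence])?;
    Ok(index)
}

// Roughly the pattern cube.c uses: the acquire signals a binary semaphore and
// the host does not wait at all; the GPU waits on the semaphore at queue submit.
unsafe fn acquire_with_semaphore(
    swapchain_loader: &Swapchain,
    swapchain: vk::SwapchainKHR,
    acquire_semaphore: vk::Semaphore,
) -> ash::prelude::VkResult<u32> {
    let (index, _suboptimal) = swapchain_loader.acquire_next_image(
        swapchain,
        u64::MAX,
        acquire_semaphore,
        vk::Fence::null(),
    )?;
    Ok(index)
}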

udoprog avatar Nov 25 '23 18:11 udoprog

This article also seems to suggest that timeline semaphores are recommended over fences for host synchronization, so it might still be a worthwhile change in wgpu:

https://www.khronos.org/blog/vulkan-timeline-semaphores
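
For reference, a host-side wait on a timeline semaphore looks roughly like this with ash (assuming Vulkan 1.2 and 0.37-style builders). This is only a sketch of the pattern the article describes, not a proposed wgpu change; `device`, `semaphore`, and `target_value` are hypothetical:

use ash::vk;

// Blocks until the timeline semaphore's 64-bit counter reaches `target_value`.
// The semaphore must have been created with vk::SemaphoreType::TIMELINE.
unsafe fn wait_timeline(
    device: &ash::Device,
    semaphore: vk::Semaphore,
    target_value: u64,
) -> ash::prelude::VkResult<()> {
    let semaphores = [semaphore];
    let values = [target_value];
    let wait_info = vk::SemaphoreWaitInfo::builder()
        .semaphores(&semaphores)
        .values(&values);
    device.wait_semaphores(&wait_info, u64::MAX)
}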

udoprog avatar Nov 25 '23 18:11 udoprog

Using a semaphore works for me, although the patch I wrote isn't pretty. Preferably the semaphore should be waited on when submitting a command buffer.
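
Roughly, the idea looks like this (a sketch with ash 0.37-style builders, not the actual patch; `queue`, `cmd_buf`, and the semaphore/fence handles are hypothetical): the binary semaphore signaled by the acquire is listed as a wait semaphore on the submit that renders to the image, so the host never blocks on an acquire fence.

use ash::vk;

unsafe fn submit_after_acquire(
    device: &ash::Device,
    queue: vk::Queue,
    cmd_buf: vk::CommandBuffer,
    acquire_semaphore: vk::Semaphore,     // signaled by acquire_next_image
    render_done_semaphore: vk::Semaphore, // waited on later by queue_present
    submit_fence: vk::Fence,              // optional CPU-side frame pacing
) -> ash::prelude::VkResult<()> {
    let wait_semaphores = [acquire_semaphore];
    // Don't start color-attachment writes until the image is actually ready.
    let wait_stages = [vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT];
    let command_buffers = [cmd_buf];
    let signal_semaphores = [render_done_semaphore];
    let submit = vk::SubmitInfo::builder()
        .wait_semaphores(&wait_semaphores)
        .wait_dst_stage_mask(&wait_stages)
        .command_buffers(&command_buffers)
        .signal_semaphores(&signal_semaphores);
    device.queue_submit(queue, &[submit.build()], submit_fence)
}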

udoprog avatar Nov 25 '23 21:11 udoprog

Thanks for the investigation into this!

This article also seems to suggest that timeline semaphores are recommended over fences for host synchronization, so it might still be a worthwhile change in wgpu:

Fences should still work. Either way, you can't use timeline semaphores for swapchain stuff; you can only use binary semaphores. Does vkcube break if converted to wait for a fence?

This sounds like a driver bug and needs to be reported to Nvidia.

cwfitzgerald avatar Nov 26 '23 03:11 cwfitzgerald

So this is how I'm building and running my patched vulkan-tools:

# Seems to be easier to install X11 dependencies than disable the build
> sudo dnf install libxcb-devel libX11-devel libXrandr-devel wayland-devel
cmake -S . -B build-release -D UPDATE_DEPS=ON -D BUILD_WERROR=ON -D BUILD_TESTS=ON -D CMAKE_BUILD_TYPE=Release
cmake --build build-release --config Release
./build-release/cube/vkcube-wayland

Note that this happens for both Release and Debug builds; I was using Release above in the hopes that I'd observe an unsignaled fence.

I'm currently not able to reproduce it with vkcube-wayland, but I'm also not able to observe an unsignaled fence:

> build-release/cube/vkcube-wayland
... lots of lines
before: 1
fence: 0

It's hard to say why. If someone has some other code they'd like me to run, I'd be happy to.

udoprog avatar Nov 26 '23 06:11 udoprog

Fences should still work. Either way, you can't use timeline semaphores for swapchain stuff; you can only use binary semaphores. Does vkcube break if converted to wait for a fence?

This sounds like a driver bug and needs to be reported to Nvidia.

Sounds good, any idea where?

In the meantime, since I'm not super familiar with wgpu: is there something that necessitates using a fence? From my brief skim of the implementation it's not entirely clear whether that is necessary, as opposed to using a semaphore and waiting on it when we submit a command buffer.

udoprog avatar Nov 26 '23 06:11 udoprog

I am not familiar enough with Vulkan to know what the best thing to do here is, but the Nvidia driver does seem to be violating this rough guarantee of the Vulkan spec:

While we guarantee that vkWaitForFences must return in finite time, no guarantees are made that it returns immediately upon device loss. However, the client can reasonably expect that the delay will be on the order of seconds and that calling vkWaitForFences will not result in a permanently (or seemingly permanently) dead process.

So unless wgpu is violating the valid usage of the API (and thus triggering undefined behavior), calling vkWaitForFences shouldn't produce an indefinite hang like this.

So it seems fair to say this is at least partly a driver bug.

ids1024 avatar Jan 03 '24 17:01 ids1024

@ids1024 The scenario cited is about what should happen during a device loss, which is something different from what happens here.

I don't know this for sure, but my current understanding is that the spec doesn't guarantee when the fence will be signaled, because the presentation engine might opt to hold onto the swapchain image for as long as it wants to, which here seems to be until a new frame is submitted or presented. Android apparently does something like that so that it can use the swapchain image for things between render calls.

At least that is my conclusion from a careful read of the spec regarding the relevant functions. That doesn't mean Nvidia might not still be interested in fixing it. That being said, the vast majority of applications do what I've proposed in #4967, so we probably just want to do that as well to avoid problems.

udoprog avatar Jan 03 '24 18:01 udoprog

Ah, I guess the line above that one says the "return in finite time" guarantee is about device loss, so there's no stated guarantee that it won't block indefinitely in other circumstances.

ids1024 avatar Jan 03 '24 18:01 ids1024

Sounds good, any idea where?

See https://nvidia.custhelp.com/app/answers/detail/a_id/44/~/where-can-i-get-support-for-linux-drivers%3F

anarsoul avatar Jan 14 '24 04:01 anarsoul

I reported this issue directly to an Nvidia Linux driver dev.

ryzendew avatar Jan 28 '24 20:01 ryzendew

for the record: Nvidia bug report by @RyzenDew https://forums.developer.nvidia.com/t/wgpu-driver-bug/280420

zocker-160 avatar Feb 09 '24 22:02 zocker-160

Hi all! This is supposedly fixed in Nvidia driver 550.67 as can be seen in the driver release notes, and I can confirm that it works with my personal project using wgpu.

krakow10 avatar Mar 25 '24 21:03 krakow10

I just checked (Gnome 46, Wayland, and Nvidia 550.67) and the problem is gone for me!

kaimast avatar Mar 25 '24 21:03 kaimast

sounds great! closing this as fixed then until we have new reason to believe otherwise

Wumpf avatar Mar 25 '24 22:03 Wumpf