Vulkan-Samples
Headless mode crashes with API samples
./vulkan_samples render_passes --headless
works, while
./vulkan_samples instancing --headless
does not, for example.
The crash appears to be at the end of ApiVulkanSample::submit_frame(): it calls device->get_queue_by_present(0), which fails because there are no queues that support present when running in headless mode.
Moving a few of the comments from #254 here to keep track of the conversation.
@gpx1000
Moving device->get_queue_by_present(0) inside the if conditional check fixes the crash.
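For context, a rough sketch of what that guarded version could look like; the surrounding names here (get_render_context(), has_swapchain(), Queue::wait_idle()) are assumptions about the framework rather than verified signatures:

```cpp
// Hedged sketch of the proposed guard in ApiVulkanSample::submit_frame().
// Names other than get_queue_by_present() are illustrative assumptions.
void ApiVulkanSample::submit_frame()
{
	if (get_render_context().has_swapchain())
	{
		// Only look up a present-capable queue when a surface/swapchain
		// exists; in headless mode this lookup is what crashes.
		const auto &queue = device->get_queue_by_present(0);

		// ... present the current swapchain image on `queue` ...

		// The wait idle moves inside the conditional, so headless runs
		// never touch a present queue.
		queue.wait_idle();
	}
}
```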
@gary-sweet
Sure. This will also skip the waitIdle, and I wasn't sure whether that was needed. It could use vkDeviceWaitIdle() instead, I guess.
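For reference, a device-level wait would look roughly like this; device->get_handle() is assumed here as the accessor for the raw VkDevice:

```cpp
// Wait on the whole device rather than a specific present queue. This works
// in headless mode because it needs no present-capable queue, but it stalls
// every queue on the device, so it is even more expensive than a queue wait.
VkResult result = vkDeviceWaitIdle(device->get_handle());
if (result != VK_SUCCESS)
{
	// Handle VK_ERROR_DEVICE_LOST etc.
}
```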
@gpx1000
I think it's fair to remove the wait idle completely. The comment above it (where 'synchronization' has a typo, missing the h) correctly points out that proper synchronization in the samples is the real way to avoid the expensive waitIdle.
Given how often synchronization is a pain point, maybe more rigor in the samples, at the cost of some verbosity, isn't a bad idea? In other words, the samples themselves should be rewritten with an eye towards proper synchronization.
@SaschaWillems
As for the waitIdle, my samples still require it. They don't have proper per-frame resources yet, so they wouldn't work without it. I'm working on updating my own samples to proper sync and want to update our API samples too, but that takes some time.
I took a closer look at this, and as it stands now we can't remove the wait idle, so this needs to be handled in a different way, e.g. by doing a device wait idle instead, though that's even worse than the queue wait idle.
As @gpx1000 noted, we should remove the wait idle altogether and do proper synchronization with per-frame resources and fences. That's a larger rework, though. I'm currently doing that for my own samples to get a feel for it, and will then try to port it over to our repo.
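For anyone following along, here is a minimal sketch of the per-frame-fence direction being discussed. This is plain Vulkan, not the repo's framework API, and the names (FrameResources, kFramesInFlight, render_frame) are made up for illustration:

```cpp
#include <vulkan/vulkan.h>

#include <array>
#include <cstdint>

// Minimal sketch of per-frame synchronization, assuming each frame owns a
// fence (created with VK_FENCE_CREATE_SIGNALED_BIT) and a command buffer.
constexpr uint32_t kFramesInFlight = 2;

struct FrameResources
{
	VkFence         in_flight;        // signalled when this frame's GPU work is done
	VkCommandBuffer command_buffer;   // re-recorded each time the frame comes around
};

std::array<FrameResources, kFramesInFlight> frames{};
uint32_t frame_index = 0;

void render_frame(VkDevice device, VkQueue queue)
{
	FrameResources &frame = frames[frame_index];

	// Instead of a queue/device wait idle every frame, wait only until this
	// frame's previous submission has completed, then recycle its resources.
	vkWaitForFences(device, 1, &frame.in_flight, VK_TRUE, UINT64_MAX);
	vkResetFences(device, 1, &frame.in_flight);

	// ... re-record frame.command_buffer for this frame ...

	VkSubmitInfo submit_info{};
	submit_info.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
	submit_info.commandBufferCount = 1;
	submit_info.pCommandBuffers    = &frame.command_buffer;

	// The fence is signalled by the GPU when this submission finishes.
	vkQueueSubmit(queue, 1, &submit_info, frame.in_flight);

	frame_index = (frame_index + 1) % kFramesInFlight;
}
```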
@gary-sweet, are we OK to close this issue now, or is there more to do? Thanks.
As far as I'm aware nothing has been done to address this, so I don't think it can be closed.
OK, thanks. Closing - please re-open if needed.
I think you misread my comment. I said it can't be closed yet.
Oops, sorry, wishful reading I guess.