Loader Settings: additional drivers with additional_drivers_use_exclusively=false doesn't allow running additional drivers as GPU0
With device_configurations, we can list devices, including additional drivers:
```
Devices:
========
GPU0:
    apiVersion = 1.4.305
    driverVersion = 0.0.1
    vendorID = 0x10005
    deviceID = 0x0000
    deviceType = PHYSICAL_DEVICE_TYPE_CPU
    deviceName = llvmpipe (LLVM 20.1.6, 256 bits)
    driverID = DRIVER_ID_MESA_LLVMPIPE
    driverName = llvmpipe
    driverInfo = Mesa 25.0.7 (git-35721f1986) (LLVM 20.1.6)
    conformanceVersion = 1.3.1.1
    deviceUUID = 6d657361-3235-2e30-2e37-000000000000
    driverUUID = 6c6c766d-7069-7065-5555-494400000000
GPU1:
    apiVersion = 1.3.271
    driverVersion = 101.5186
    vendorID = 0x8086
    deviceID = 0x56a1
    deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
    deviceName = Intel(R) Arc(TM) A750 Graphics
    driverID = DRIVER_ID_INTEL_PROPRIETARY_WINDOWS
    driverName = Intel Corporation
    driverInfo = Intel driver
    conformanceVersion = 1.3.6.3
    deviceUUID = 8680a156-0800-0000-0e00-000000000000
    driverUUID = 33312e30-2e31-3031-2e35-313836000000
GPU2:
    apiVersion = 1.3.295
    driverVersion = 553.22.0.0
    vendorID = 0x10de
    deviceID = 0x2208
    deviceType = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
    deviceName = NVIDIA GeForce RTX 3080 Ti
    driverID = DRIVER_ID_NVIDIA_PROPRIETARY
    driverName = NVIDIA
    driverInfo = 553.22
    conformanceVersion = 1.3.8.2
    deviceUUID = bc4a01b1-5641-8058-47a8-151a395a80c7
    driverUUID = 1a8cb5ae-3ba9-590a-8ba4-a405828f4748
```
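For reference, the GPU indices above correspond to the order the loader hands back from vkEnumeratePhysicalDevices. Here is a minimal sketch of that enumeration (assuming an already-created VkInstance; this is not the vulkaninfo source):

```c
/* Minimal sketch: enumerate physical devices through the loader and print a
 * per-GPU summary similar to vulkaninfo's output above.
 * Assumes `instance` is a valid VkInstance; error handling omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

static void list_gpus(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);

    VkPhysicalDevice *gpus = malloc(count * sizeof(*gpus));
    vkEnumeratePhysicalDevices(instance, &count, gpus);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        printf("GPU%u: %s (deviceType = %u)\n",
               i, props.deviceName, (unsigned)props.deviceType);
    }
    free(gpus);
}
```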
However, when putting the additional driver first, as "GPU0", running vkcube shows that GPU1 is used instead.
When moving a "typical" driver first, that typical driver is used. In my case, this typical driver can be either my NVIDIA or Intel GPU.
I think this is actually a consequence of https://github.com/KhronosGroup/Vulkan-Tools/pull/1047, which changes vkcube's GPU selection logic from "the first in the list" to "discrete, then integrated, then virtual, then CPU".
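For clarity, that post-#1047 behavior boils down to a type-priority pick over the enumerated devices. A rough sketch of that kind of selection (not the actual vkcube code):

```c
/* Sketch of a "discrete > integrated > virtual > CPU" pick, approximating the
 * selection described above (not the actual vkcube implementation). */
#include <vulkan/vulkan.h>

static int type_rank(VkPhysicalDeviceType t)
{
    switch (t) {
    case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:   return 0;
    case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU: return 1;
    case VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU:    return 2;
    case VK_PHYSICAL_DEVICE_TYPE_CPU:            return 3;
    default:                                     return 4;
    }
}

/* Returns the index of the "best" device by type; ties keep the loader's order. */
static uint32_t pick_gpu(uint32_t count, const VkPhysicalDevice *gpus)
{
    uint32_t best = 0;
    int best_rank = 5;
    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        int rank = type_rank(props.deviceType);
        if (rank < best_rank) {
            best = i;
            best_rank = rank;
        }
    }
    return best;
}
```

With that ordering, a CPU device such as llvmpipe always loses to a discrete GPU, regardless of where the loader places it in the enumeration, which matches the behavior reported above.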
I am open to removing that logic from vkcube, but this issue should be moved to Vulkan-Tools.
@christophe-lunarg Can you specify which vkcube you used? I.e. which SDK version; that would rule out the problem where vkcube changed its logic.
Ah that could be! Rather than removing this logic, what about adding an argument to control that behavior?
I'll test tomorrow with an old version of vkcube!
Using this file, with vkcube from SDK 1.4.304.1, I can still reproduce the issue:
vk_loader_settings.json
```
Selected GPU 0: NVIDIA GeForce RTX 3080 Ti, type: DiscreteGpu
```
But maybe, I agree, it might not be a Loader issue but a vkcube issue that plays around with device selection... That said, how does vkcube manage to see my NVIDIA GPU when device_configurations only lists llvmpipe? So it sounds like something is off with the loader too.
Running vulkaninfo, I am getting:
```
Devices:
========
GPU0:
    apiVersion = 1.4.305
    driverVersion = 0.0.1
    vendorID = 0x10005
    deviceID = 0x0000
    deviceType = PHYSICAL_DEVICE_TYPE_CPU
    deviceName = llvmpipe (LLVM 20.1.6, 256 bits)
    driverID = DRIVER_ID_MESA_LLVMPIPE
    driverName = llvmpipe
    driverInfo = Mesa 25.0.7 (git-35721f1986) (LLVM 20.1.6)
    conformanceVersion = 1.3.1.1
    deviceUUID = 6d657361-3235-2e30-2e37-000000000000
    driverUUID = 6c6c766d-7069-7065-5555-494400000000
```
On my side, I'll investigate how vkcube handles device selection, maybe adding an argument to enable or disable the automatic device selection.
If I use `vkcube --gpu_number 0`, even with the SDK 321, then I can run lavapipe. So there is definitely something going on with the GPU selection in vkcube too.
It looks like the default behavior of vkcube is `vkcube --gpu_number -1`, which causes automatic selection. Given the context of the SDK and this loader change, I think we could consider making `vkcube --gpu_number 0` the default, so changing vkcube. What would you think about such a change?
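To make the proposal concrete, here is a hypothetical sketch (illustrative names only, not vkcube's actual code): with a default of 0, the loader's enumeration order decides, and only an explicit -1 would re-enable the automatic type-based pick.

```c
/* Hypothetical sketch of the proposed default (illustrative only): a
 * non-negative gpu_number selects that index from the loader's enumeration
 * order, while -1 falls back to an automatic, type-priority selection. */
#include <vulkan/vulkan.h>

typedef uint32_t (*auto_pick_fn)(uint32_t count, const VkPhysicalDevice *gpus);

static uint32_t select_gpu(int gpu_number, uint32_t count,
                           const VkPhysicalDevice *gpus, auto_pick_fn auto_pick)
{
    if (gpu_number >= 0 && (uint32_t)gpu_number < count)
        return (uint32_t)gpu_number;   /* respect the loader's enumeration order */
    return auto_pick(count, gpus);     /* e.g. the type-priority sketch above */
}
```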
That said, if we can understand how vkcube manages to see physical devices not listed in device_configurations, it would likely make this change of vkcube's default behavior pointless.
I'll continue debugging vkcube to figure this out...
Okay, something is terribly weird: now I no longer reproduce the issue. I am getting vkcube to start with llvmpipe in vkconfig with the same vk_loader_settings.json as above. Oo