Unable to use CUDA provider
Environment (please complete the following information):
- OS/OS Version: Windows 11
- Source Version: main/v0.1.13
- Unity Version: 2021.3.27f1
Describe the bug
I am unable to use the CUDA provider. I'm using the following code to enable it:
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;

var cudaProviderOptions = new OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>
{
    ["device_id"] = "0",
    ["gpu_mem_limit"] = "2147483648",
    ["arena_extend_strategy"] = "kSameAsRequested",
    ["cudnn_conv_algo_search"] = "DEFAULT",
    ["do_copy_in_default_stream"] = "1",
    ["cudnn_conv_use_max_workspace"] = "1",
    ["cudnn_conv1d_pad_to_nc1d"] = "1",
};
cudaProviderOptions.UpdateOptions(providerOptionsDict);
_sessionOptions = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
Running this, I get the following exception:
OnnxRuntimeException: [ErrorCode:Fail] CUDA execution provider is not enabled in this build.
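One way to confirm whether the loaded native library was actually built with CUDA support is to list the available execution providers at runtime. A minimal sketch, using the standard OrtEnv C# API and assuming Unity's Debug.Log:

using Microsoft.ML.OnnxRuntime;
using UnityEngine;

// Logs every execution provider compiled into the loaded onnxruntime binary.
// If "CUDAExecutionProvider" is not listed, the CPU-only build is being loaded
// and MakeSessionOptionWithCudaProvider will fail with exactly this error.
foreach (var provider in OrtEnv.Instance().GetAvailableProviders())
{
    Debug.Log(provider);
}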
Additional context
I'm referencing the com.github.asus4.onnxruntime assembly in my asmdef file. From what I've gathered online, there should probably be a com.github.asus4.onnxruntime.win-x64-gpu assembly as well, but I cannot find it.
Hey, is there something I can do as a temporary workaround? Or can you give a tentative timeline for a fix?
Hi @Nilavazhagan, thanks for reporting this. The CUDA provider is still experimental, and it has not been confirmed to work in all environments.
Can you check the following steps:
- Follow the ONNX Runtime documentation and install the corresponding version of CUDA.
- Confirm that the Python ORT library works with CUDA.
- Check that com.github.asus4.onnxruntime.win-x64-gpu is in your manifest.json (see the fragment after this list):
  "com.github.asus4.onnxruntime.win-x64-gpu": "0.1.13",
Hi @asus4, unfortunately I couldn't get to that, but I verified that com.github.asus4.onnxruntime.win-x64-gpu is in my manifest.json.
So I started trying the DirectML execution provider instead, and I'm getting the following error:
OnnxRuntimeException: [ErrorCode:RuntimeException] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(2792)\onnxruntime.dll!00007FF838C97CC7: (caller: 00007FF838C97A84) Exception(4) tid(71b8) 80070057 The parameter is incorrect.
Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess (System.IntPtr nativeStatus) (at Library/PackageCache/com.github.asus4.onnxruntime@0.1.13/Runtime/NativeApiStatus.shared.cs:33)
Microsoft.ML.OnnxRuntime.InferenceSession.Init (System.Byte[] modelData, Microsoft.ML.OnnxRuntime.SessionOptions options, Microsoft.ML.OnnxRuntime.PrePackedWeightsContainer prepackedWeightsContainer) (at Library/PackageCache/com.github.asus4.onnxruntime@0.1.13/Runtime/InferenceSession.shared.cs:1207)
Microsoft.ML.OnnxRuntime.InferenceSession..ctor (System.Byte[] model, Microsoft.ML.OnnxRuntime.SessionOptions options) (at Library/PackageCache/com.github.asus4.onnxruntime@0.1.13/Runtime/InferenceSession.shared.cs:156)
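For context, a typical way to enable the DirectML provider in C# is sketched below (an assumption about the setup here, not the exact code used). The ONNX Runtime DirectML docs call for disabling memory patterns and keeping execution sequential, and leaving those at their defaults may be related to this kind of initialization failure:

using Microsoft.ML.OnnxRuntime;

var options = new SessionOptions();
// Per the ONNX Runtime DirectML docs: memory patterns must be disabled and
// execution must stay sequential when the DML provider is enabled.
options.EnableMemoryPattern = false;
options.ExecutionMode = ExecutionMode.ORT_SEQUENTIAL;
options.AppendExecutionProvider_DML(0); // default adapter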
I also tried your example project, onnxruntime-unity-examples, and I'm getting a different error there:
OnnxRuntimeException: [ErrorCode:NotImplemented] Failed to find kernel for com.microsoft.FusedConv(1) (node:'Conv_0' ep:'DmlExecutionProvider'). Kernel not found
Microsoft.ML.OnnxRuntime.Unity.ImageInference`1[T]..ctor (System.Byte[] model, Microsoft.ML.OnnxRuntime.Unity.ImageInferenceOptions options) (at ./Library/PackageCache/com.github.asus4.onnxruntime.unity@0.1.13/Runtime/ImageInference.cs:58)
Microsoft.ML.OnnxRuntime.Examples.MobileOne..ctor (System.Byte[] model, Microsoft.ML.OnnxRuntime.Examples.MobileOne+Options options) (at Assets/Samples/MobileOne/MobileOne.cs:48)
MobileOneSample.Start () (at Assets/Samples/MobileOne/MobileOneSample.cs:33)
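This FusedConv failure looks like the graph optimizer fusing Conv nodes into the com.microsoft.FusedConv contrib op, for which this DML build reports no kernel. If your setup lets you reach the SessionOptions, one thing worth trying (an assumption on my part, not a confirmed fix) is capping the optimization level so the fusion doesn't happen:

using Microsoft.ML.OnnxRuntime;

var options = new SessionOptions();
options.EnableMemoryPattern = false;
options.ExecutionMode = ExecutionMode.ORT_SEQUENTIAL;
// Capping optimizations at BASIC avoids extended fusions such as
// com.microsoft.FusedConv that some execution providers cannot run.
options.GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_BASIC;
options.AppendExecutionProvider_DML(0);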
Is there a way to at least get DirectML to work? That might be sufficient for our use case, tbh.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.