ONNX-Runtime-Inference
C++ no instance of constructor matches the argument list argument types are: (Ort::Env, const char *, Ort::SessionOptions)
When I run inference.cpp, I get this error. Please guide me on how to solve it.
Issue seems to be in line 199 of inference.cpp
Please indicate whether you are using the Docker container that I provided. Also please provide your command and all the information printed out in the terminal.
Hi, thanks for your reply. I am running your code in a Visual Studio 2019 project on Windows. I downloaded the onnxruntime library from https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1 and included the files in the Visual Studio project. No matter what version I try, I get this error. I am not using Docker.
I cannot help much if you are not using the Docker container, because I cannot easily reproduce the bug otherwise.
Thanks for your reply. Do you know of any resources on using onnxruntime with Visual Studio on Windows?
@Adnan-annan you can check the build instructions for onnxruntime on windows from here: https://onnxruntime.ai/docs/how-to/build/inferencing.html
@Adnan-annan you should change
std::string modelFilepath{ "./data/models/squeezenet1.1-7.onnx" };
to
std::wstring modelFilepath{ L"./data/models/squeezenet1.1-7.onnx" };
when using VS 2019.
The reason is that on Windows the Ort::Session constructor expects a const wchar_t * (a wide-character string) as its second parameter.
And @leimao, do you know the reason? Thanks.
Using ONNX Runtime 1.9.0 I was having a couple of issues, including one related to the unmatched constructor, both in VS 2019 and in Qt Creator.
This is my solution for the session instance:
const wchar_t* modelFilepath = L"dir_to_file/model.onnx";
...
...
Ort::Session session(env, modelFilepath, sessionOptions);
And this is my fix for the device_id setting; I had to write:
OrtCUDAProviderOptions cuda_options;
cuda_options.device_id = 0;
sessionOptions.AppendExecutionProvider_CUDA(cuda_options);
Instead of:
OrtCUDAProviderOptions cuda_options{0};
sessionOptions.AppendExecutionProvider_CUDA(cuda_options);
@giaxxi Hello, I get the error: provider_bridge_ort.cc:940 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "The specified module could not be found." when trying to load "E:\WorkProject\wanqi\zepeng_code\v3\build-v3_demo-Desktop_Qt_5_14_0_MSVC2017_64bit-Release\release\onnxruntime_providers_cuda.dll". The onnxruntime_providers_cuda.dll does exist, so do I need to update my CUDA version?
I changed the version of ONNX Runtime from 1.13.0 to 1.12.0, which solved this problem, but it caused another error...
error message:
(Run in debug mode)
Error C2665 'Ort::Session::Session': no overloaded function could convert all the argument types
It seems like this error originates here (in xutility, line 231):
template <class _Ty, class... _Types>
_CONSTEXPR20 void _Construct_in_place(_Ty& _Obj, _Types&&... _Args) noexcept(
    is_nothrow_constructible_v<_Ty, _Types...>) {
#if _HAS_CXX20
    if (_STD is_constant_evaluated()) {
        _STD construct_at(_STD addressof(_Obj), _STD forward<_Types>(_Args)...);
    } else
#endif // _HAS_CXX20
    {
        ::new (_Voidify_iter(_STD addressof(_Obj))) _Ty(_STD forward<_Types>(_Args)...);
    }
}
Do you know how I can solve this?