yolov7-detect-face-onnxrun-cpp-py
OnnxRunTime Problem
When I compile main.cpp in Colab with "!g++ /content/yolov7-detect-face-onnxrun-cpp-py/main.cpp -o cv -I/usr/include/opencv4 -I/usr/local/include/onnxruntime/", I get the errors below. What could be the problem, and how can I fix it?
/content/yolov7-detect-face-onnxrun-cpp-py/main.cpp: In constructor ‘YOLOV7_face::YOLOV7_face(Net_config)’:
/content/yolov7-detect-face-onnxrun-cpp-py/main.cpp:76:64: error: no matching function for call to ‘Ort::Session::Session(Ort::Env&, const wchar_t*, Ort::SessionOptions&)’
76 | ort_session = new Session(env, widestr.c_str(), sessionOptions);
| ^
In file included from /usr/local/include/onnxruntime/onnxruntime_cxx_api.h:1876,
from /content/yolov7-detect-face-onnxrun-cpp-py/main.cpp:7:
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:950:8: note: candidate: ‘Ort::Session::Session(const Ort::Env&, const void*, size_t, const Ort::SessionOptions&, OrtPrepackedWeightsContainer*)’
950 | inline Session::Session(const Env& env, const void* model_data, size_t model_data_length,
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:950:8: note: candidate expects 5 arguments, 3 provided
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:946:8: note: candidate: ‘Ort::Session::Session(const Ort::Env&, const void*, size_t, const Ort::SessionOptions&)’
946 | inline Session::Session(const Env& env, const void* model_data, size_t model_data_length, const SessionOptions& options) {
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:946:8: note: candidate expects 4 arguments, 3 provided
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:941:8: note: candidate: ‘Ort::Session::Session(const Ort::Env&, const char*, const Ort::SessionOptions&, OrtPrepackedWeightsContainer*)’
941 | inline Session::Session(const Env& env, const ORTCHAR_T* model_path, const SessionOptions& options,
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:941:8: note: candidate expects 4 arguments, 3 provided
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:937:8: note: candidate: ‘Ort::Session::Session(const Ort::Env&, const char*, const Ort::SessionOptions&)’
937 | inline Session::Session(const Env& env, const ORTCHAR_T* model_path, const SessionOptions& options) {
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_inline.h:937:58: note: no known conversion for argument 2 from ‘const wchar_t*’ to ‘const char*’
937 | inline Session::Session(const Env& env, const ORTCHAR_T* model_path, const SessionOptions& options) {
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~
In file included from /content/yolov7-detect-face-onnxrun-cpp-py/main.cpp:7:
/usr/local/include/onnxruntime/onnxruntime_cxx_api.h:783:12: note: candidate: ‘Ort::Session::Session(std::nullptr_t)’
783 | explicit Session(std::nullptr_t) {} ///< Create an empty Session object, must be assigned a valid one to be used
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_api.h:783:12: note: candidate expects 1 argument, 3 provided
/usr/local/include/onnxruntime/onnxruntime_cxx_api.h:782:8: note: candidate: ‘Ort::Session::Session(Ort::Session&&)’
782 | struct Session : detail::SessionImpl<OrtSession> {
| ^~~~~~~
/usr/local/include/onnxruntime/onnxruntime_cxx_api.h:782:8: note: candidate expects 1 argument, 3 provided
/content/yolov7-detect-face-onnxrun-cpp-py/main.cpp:82:38: error: ‘struct Ort::Session’ has no member named ‘GetInputName’
82 | input_names.push_back(ort_session->GetInputName(i, allocator));
| ^~~~~~~~~~~~
/content/yolov7-detect-face-onnxrun-cpp-py/main.cpp:90:39: error: ‘struct Ort::Session’ has no member named ‘GetOutputName’
90 | output_names.push_back(ort_session->GetOutputName(i, allocator));
| ^~~~~~~~~~~~~
I also tried with CMake, but got the same errors.
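The candidate list in the error points at the cause. ORTCHAR_T is wchar_t only on Windows; the Linux build of onnxruntime expects a const char* model path, so the wchar_t* produced by widestr matches none of the Ort::Session constructors. The later errors about GetInputName / GetOutputName indicate the installed onnxruntime is new enough to have dropped those accessors from the C++ API. A minimal sketch of portable path handling, assuming only the repo's modelpath string; everything else here is illustrative:

#include <string>
#include <onnxruntime_cxx_api.h>

// Build the model path in the character type that Ort::Session expects.
// On Windows ORTCHAR_T is wchar_t, so a std::wstring is needed there;
// on Linux and macOS it is plain char, so the narrow string is passed as-is.
static Ort::Session createSession(Ort::Env& env, const std::string& modelpath,
                                  Ort::SessionOptions& sessionOptions)
{
#ifdef _WIN32
    std::wstring widestr(modelpath.begin(), modelpath.end());
    return Ort::Session(env, widestr.c_str(), sessionOptions);
#else
    return Ort::Session(env, modelpath.c_str(), sessionOptions);
#endif
}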
When I was compiling, I also encountered the same error. Did you solve this problem?
I solved this problem by switching to onnxruntime version 1.8.1; it then compiled successfully.
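Pinning 1.8.1 works because newer onnxruntime releases removed Session::GetInputName / Session::GetOutputName from the C++ API (they were deprecated around 1.12 and dropped afterwards) in favor of GetInputNameAllocated / GetOutputNameAllocated, which is exactly what the "has no member named GetInputName" errors above are complaining about. If you would rather stay on a current onnxruntime, a sketch of the replacement calls; the names come back as owning smart pointers, so they are copied into std::string here instead of being stored as raw const char*:

// Sketch for newer onnxruntime versions where GetInputName no longer exists.
Ort::AllocatorWithDefaultOptions allocator;
std::vector<std::string> input_names_storage;
for (size_t i = 0; i < ort_session->GetInputCount(); i++)
{
    // GetInputNameAllocated returns an Ort::AllocatedStringPtr that frees the
    // name when it goes out of scope, so copy the string before that happens.
    Ort::AllocatedStringPtr name = ort_session->GetInputNameAllocated(i, allocator);
    input_names_storage.push_back(name.get());
}
// The output loop follows the same pattern with GetOutputNameAllocated.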
I changed my onnxruntime version to 1.8.1 but still get the error: no matching function for call to ‘Ort::Session::Session(Ort::Env&, const wchar_t*, Ort::SessionOptions&)’ at line 76: ort_session = new Session(env, widestr.c_str(), sessionOptions);
string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
// OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);
Change widestr to model_path and delete the line std::wstring widestr = std::wstring(model_path.begin(), model_path.end()); then you can compile it.
I modified my code like this:
string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
// OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);
Now a new error occurs: corrupted size vs. prev_size, Aborted (core dumped).
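"corrupted size vs. prev_size" is glibc detecting heap corruption. One plausible cause in this setup, though not confirmed in the thread, is compiling against the headers of one onnxruntime version while linking or loading the shared library of another, which breaks ABI compatibility. A small check, assuming only the standard C API from onnxruntime_c_api.h:

#include <cstdio>
#include <onnxruntime_c_api.h>

int main()
{
    // ORT_API_VERSION comes from the headers used at compile time;
    // GetVersionString() reports the libonnxruntime actually loaded at run time.
    // If the two disagree, the headers and the library are mismatched.
    std::printf("compiled against ORT_API_VERSION %d\n", ORT_API_VERSION);
    std::printf("runtime library version: %s\n", OrtGetApiBase()->GetVersionString());
    return 0;
}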
After switching to onnxruntime 1.8.1, this is the only place I changed; nothing else was modified. My build command is: g++ main.cpp -o demo.out -lonnxruntime -I/media/nie/D/soft/onnxruntime-linux-x64-1.8.1/include -L/media/nie/D/soft/onnxruntime-linux-x64-1.8.1/lib `pkg-config --cflags --libs opencv4`
YOLOV7_face::YOLOV7_face(Net_config config)
{
this->confThreshold = config.confThreshold;
this->nmsThreshold = config.nmsThreshold;
string model_path = config.modelpath;
// std::wstring widestr = std::wstring(model_path.begin(), model_path.end());
//OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
sessionOptions.SetGraphOptimizationLevel(ORT_ENABLE_BASIC);
ort_session = new Session(env, model_path.c_str(), sessionOptions);
size_t numInputNodes = ort_session->GetInputCount();
size_t numOutputNodes = ort_session->GetOutputCount();
AllocatorWithDefaultOptions allocator;
for (int i = 0; i < numInputNodes; i++)
{
input_names.push_back(ort_session->GetInputName(i, allocator));
Ort::TypeInfo input_type_info = ort_session->GetInputTypeInfo(i);
auto input_tensor_info = input_type_info.GetTensorTypeAndShapeInfo();
auto input_dims = input_tensor_info.GetShape();
input_node_dims.push_back(input_dims);
}
for (int i = 0; i < numOutputNodes; i++)
{
output_names.push_back(ort_session->GetOutputName(i, allocator));
Ort::TypeInfo output_type_info = ort_session->GetOutputTypeInfo(i);
auto output_tensor_info = output_type_info.GetTensorTypeAndShapeInfo();
auto output_dims = output_tensor_info.GetShape();
output_node_dims.push_back(output_dims);
}
this->inpHeight = input_node_dims[0][2];
this->inpWidth = input_node_dims[0][3];
}
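For completeness, a rough sketch of how the fixed constructor would be driven. Only the Net_config field names are taken from the code above; the threshold values, model filename, test image, and the detect() call are placeholders assumed to match the rest of the repo's main.cpp:

int main()
{
    // Hypothetical values; only the field names are known from the constructor above.
    Net_config cfg;
    cfg.confThreshold = 0.45;
    cfg.nmsThreshold = 0.5;
    cfg.modelpath = "weights/yolov7-face.onnx";   // placeholder path

    YOLOV7_face detector(cfg);                    // runs the constructor shown above
    cv::Mat frame = cv::imread("test.jpg");       // placeholder image
    detector.detect(frame);                       // assumed entry point from the repo
    cv::imwrite("result.jpg", frame);
    return 0;
}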
Thank you very much! It runs now. But when I use the ONNX model exported from my own training, it reports an ONNX version problem. Is this an issue with the official export.py?