
Converted ONNX model works in Python but not in C++

Open darkcoder2000 opened this issue 3 years ago • 4 comments

I can load and use a model that has been converted from PyTorch to ONNX with the Python ONNX Runtime. But the same model does not work properly with the C++ ONNX Runtime: it returns strange output tensor shapes, and I am not getting any error message.

In Python the input and output tensors look like this

No. of inputs : 1, No. of outputs : 3
0 Input name : actual_input, Input shape : [1, 3, 512, 512],     Input type  : tensor(float)
0 Output name : output, Output shape : ['Concatoutput_dim_0', 4],     Output type  : tensor(float)
1 Output name : 3174, Output shape : ['Gather3174_dim_0'],     Output type  : tensor(int64)
2 Output name : 3173, Output shape : ['Gather3173_dim_0'],     Output type  : tensor(float)

But in C++ it looks like this

Input Node Name/Shape (1):
	images : 1x3x512x512
Output Node Name/Shape (3):
	3214 : -1x4
	3191 : -1
	3190 : -1

When I run the inference in C++ I am getting these shapes back.

output_tensor_shape: 0x4
output_tensor_shape: 0
output_tensor_shape: 0

Here is the C++ code I am using

//#include <onnxruntime_cxx_api.h>
#include <experimental_onnxruntime_cxx_api.h>

#include <opencv2/opencv.hpp>

#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

using namespace std;
using namespace cv;
using namespace Ort;

// pretty-print a shape dimension vector (helper from the model-explorer example)
std::string print_shape(const std::vector<int64_t>& v) {
	std::stringstream ss;
	for (size_t i = 0; i < v.size() - 1; i++)
		ss << v[i] << "x";
	ss << v[v.size() - 1];
	return ss.str();
}

cv::Mat resizedImageBGR, resizedImageRGB, resizedImage, preprocessedImage;
std::vector<float> image_array;

int main() {
	std::string model_file = "model.onnx";

	// onnxruntime setup
	Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "onnx-executor");
	Ort::SessionOptions session_options;
	Ort::Experimental::Session session = Ort::Experimental::Session(env, model_file, session_options);

	// print name/shape of inputs
	std::vector<std::string> input_names = session.GetInputNames();
	std::vector<std::vector<int64_t>> input_shapes = session.GetInputShapes();
	cout << "Input Node Name/Shape (" << input_names.size() << "):" << endl;
	for (size_t i = 0; i < input_names.size(); i++) {
		cout << "\t" << input_names[i] << " : " << print_shape(input_shapes[i]) << endl;
	}

	// print name/shape of outputs
	std::vector<std::string> output_names = session.GetOutputNames();
	std::vector<std::vector<int64_t>> output_shapes = session.GetOutputShapes();
	cout << "Output Node Name/Shape (" << output_names.size() << "):" << endl;
	for (size_t i = 0; i < output_names.size(); i++) {
		cout << "\t" << output_names[i] << " : " << print_shape(output_shapes[i]) << endl;
	}

	// create input tensor
	Mat image = imread("/TestPic.jpg");
	Mat imageResized;
	resize(image, imageResized, Size(512, 512), INTER_LINEAR);

	std::vector<Ort::Value> input_tensors;
	input_tensors.push_back(
			Ort::Experimental::Value::CreateTensor<float>(imageResized.ptr<float>(0),
					512 * 512 * 3 * sizeof(float), input_shapes[0]));

	try {
		auto output_tensors = session.Run(session.GetInputNames(), input_tensors, session.GetOutputNames());
		cout << "done" << endl;

		// double-check the dimensions of the output tensors
		// NOTE: the number of output tensors equals the number of output nodes specified in the Run() call
		assert(output_tensors.size() == session.GetOutputNames().size() && output_tensors[0].IsTensor());
		cout << "output_tensor_shape: " << print_shape(output_tensors[0].GetTensorTypeAndShapeInfo().GetShape()) << endl;
		cout << "output_tensor_shape: " << print_shape(output_tensors[1].GetTensorTypeAndShapeInfo().GetShape()) << endl;
		cout << "output_tensor_shape: " << print_shape(output_tensors[2].GetTensorTypeAndShapeInfo().GetShape()) << endl;
	} catch (const Ort::Exception& e) {
		cout << "ERROR running model inference: " << e.what() << endl;
		return -1;
	}
	return 0;
}

System information

OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20
ONNX Runtime installed from (source or binary): binary
ONNX Runtime version: 1.9.0 and 1.11.1
Python version: 3.7.11
Visual Studio version (if applicable): none
GCC/Compiler version (if compiling from source): none
CUDA/cuDNN version: none
GPU model and memory: none

To Reproduce

Unfortunately, the model is too big to share here.

darkcoder2000 avatar Jun 07 '22 10:06 darkcoder2000

The implementation behind the Python and C++ interfaces is the same. You will need to check for bugs in your code. For example, you are passing metadata as shapes instead of the actual shape. Metadata may contain -1 as a shape designator, not to mention other things.

You are feeding 512 * 512 * 3 * sizeof(float), which is a size in bytes, but the overload you use to create the tensor is likely expecting the number of elements. Also, the image resize is using 512x512; how does that agree with 512 * 512 * 3?

yuslepukhin avatar Jun 08 '22 16:06 yuslepukhin

Is there a C++ reference implementation available that I can follow? The code I am using works perfectly fine for another object detection ONNX model (1 input, 4 output tensors). My implementation follows the example from here: https://github.com/microsoft/onnxruntime-inference-examples/blob/main/c_cxx/model-explorer/model-explorer.cpp

"For example, you are passing metadata as shapes, instead of the actual shape." I am not sure what you mean by metadata. I am extracting the input/output tensor shapes from the model so that I know what shapes the model is expecting.

Images are in RGB format, which needs 3 bytes per pixel. That's why it fits with 512 * 512 * 3. Also, when the input tensor size doesn't fit, there is an error message.

The problem I am seeing is that one ONNX model for object detection works in Python and C++ (using the posted C++ code), while another ONNX model for object detection works in Python but not in C++ (using the posted C++ code). That is what this issue report is about.

I am wondering why this is the case, and unfortunately there is very little example code showing how to properly use ONNX in C++.

darkcoder2000 avatar Jun 13 '22 07:06 darkcoder2000

I am also facing the same issue. I have two models trained with YOLOv7; one of them works properly in both Python and C++.

But the other model works only in Python; in C++ it doesn't show anything, and there is no error, as you mentioned.

The input and output parameters are the same as well.

By the way, have you found any solution for this?

Ramgade894 avatar Oct 17 '24 10:10 Ramgade894

It's not a solution, more like a bypass, but you can convert the ONNX file into a TensorRT engine using the TensorRT library in C++. However, it will only work on a GPU machine, I think.
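For reference, such a conversion can be done offline with TensorRT's bundled trtexec tool (assuming it is on the PATH; the file names here are placeholders):

```shell
# Build a serialized TensorRT engine from the ONNX model (requires an NVIDIA GPU).
trtexec --onnx=model.onnx --saveEngine=model.engine
```

The resulting engine file can then be loaded in C++ with the TensorRT runtime instead of ONNX Runtime.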

Ramgade894 avatar Oct 20 '24 17:10 Ramgade894

Applying stale label due to no activity in 30 days