
ONNX model converted by tf2onnx doesn't work in C++


Describe the bug

Hi, I used the tool to successfully convert a saved_model to an ONNX model, but when I run inference with that ONNX model in C++ it fails: the program just exits with an exception. An ONNX model converted from PyTorch works fine in the same program. Could you give any suggestions? Thank you! This is the console output:

2023-05-18 14:21:33.2750436 [E:onnxruntime:, sequential_executor.cc:346 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Transpose node. Name:'StatefulPartitionedCall/attnGateVnet3d/conv3d_48/Conv3D__890' Status Message: CUDA error cudaErrorInvalidConfiguration:invalid configuration argument
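
For reference, the conversion itself was done with the standard tf2onnx CLI, roughly like this (the exact opset number and saved_model path below are illustrative, not copied verbatim):

    python -m tf2onnx.convert --saved-model ./saved_model --output in272_20230518.onnx --opset 13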

And here is the C++ code:

#include <onnxruntime_cxx_api.h>
#include <onnxruntime_c_api.h>
#include <tensorrt_provider_factory.h>  // included, but the TensorRT EP is not used below

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    wstring model_path = L"in272_20230518.onnx";

    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "airway");
    Ort::SessionOptions session_options;

    // Run on the CUDA execution provider (device 0 by default).
    OrtCUDAProviderOptions cuda_options{};
    session_options.AppendExecutionProvider_CUDA(cuda_options);

    Ort::AllocatorWithDefaultOptions allocator;

    Ort::Session session(env, model_path.c_str(), session_options);

    Ort::MemoryInfo memory_info = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtArenaAllocator, OrtMemType::OrtMemTypeDefault);

    std::vector<const char*> input_node_names = { "input_1" };
    std::vector<const char*> output_node_names = { "pred" };

    // Dummy all-zero input volume of 272 x 272 x 32 voxels.
    std::vector<float> input_image_1(272 * 272 * 32, 0.0f);
    float* input_1 = input_image_1.data();

    cout << session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetElementType() << endl;
    cout << input_image_1.size() << endl;

    // NDHWC-style shape: batch = 1, 272 x 272 x 32, 1 channel.
    std::vector<int64_t> input_shape{ 1, 272, 272, 32, 1 };
    std::vector<Ort::Value> input_tensors;
    input_tensors.push_back(
        Ort::Value::CreateTensor<float>(memory_info, input_1,
            input_image_1.size(), input_shape.data(), input_shape.size()));

    std::vector<Ort::Value> output_tensors;

    // The exception is thrown here, inside Run().
    output_tensors = session.Run(
        Ort::RunOptions{ nullptr },
        input_node_names.data(),   // input names
        input_tensors.data(),      // input tensors
        input_tensors.size(),      // 1 input
        output_node_names.data(),  // output names
        output_node_names.size()); // 1 output

    return 0;
}
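
To narrow things down, here is a minimal CPU-only sketch of the same run (no CUDA provider appended, so ONNX Runtime falls back to the default CPU execution provider; the log id and printout are only for this sketch). If this variant runs to completion, the failure is specific to the CUDA execution provider rather than to the converted model itself:

#include <onnxruntime_cxx_api.h>

#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::wstring model_path = L"in272_20230518.onnx";

    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "airway_cpu_check");
    Ort::SessionOptions session_options;  // no CUDA EP appended -> default CPU execution provider

    Ort::Session session(env, model_path.c_str(), session_options);

    Ort::MemoryInfo memory_info = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtArenaAllocator, OrtMemType::OrtMemTypeDefault);

    std::vector<const char*> input_names = { "input_1" };
    std::vector<const char*> output_names = { "pred" };

    // Same dummy all-zero volume and NDHWC shape as above.
    std::vector<float> input_image(272 * 272 * 32, 0.0f);
    std::vector<int64_t> input_shape{ 1, 272, 272, 32, 1 };

    Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
        memory_info, input_image.data(), input_image.size(),
        input_shape.data(), input_shape.size());

    auto output_tensors = session.Run(
        Ort::RunOptions{ nullptr },
        input_names.data(), &input_tensor, 1,
        output_names.data(), output_names.size());

    std::cout << "CPU run finished, outputs: " << output_tensors.size() << std::endl;
    return 0;
}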

Urgency

System information

  • OS Platform and Distribution: Windows
  • TensorFlow version: 2.10
  • Python version: 3.9
  • ONNX version: 1.13
  • ONNXRuntime version: 1.14
  • ONNXRuntime C++ (Windows): 1.9

To Reproduce

Screenshots

Additional context

zx-lhb · May 18 '23 06:05