
export to Onnx is missing inputs, can't run inference

Mercury-ML opened this issue 3 years ago · 6 comments

This issue is blocking me from running inference on my trained model.

When running inference on the exported ONNX file (I exported by adding the `--export_onnx` flag to testing), the inputs seem to be missing and I get an error. This:

```python
input_name = sess.get_inputs()[0].name
print("input name", input_name)
```

results in:

```
input_name = sess.get_inputs()[0].name
IndexError: list index out of range
```

However, when I check the model with

```python
import onnx

model_path = 'ONNX-model.onnx'
onnx_model = onnx.load(model_path)

# Check the model
try:
    onnx.checker.check_model(onnx_model)
except onnx.checker.ValidationError as e:
    print('The model is invalid: %s' % e)
else:
    print('The model is valid!')

output = onnx_model.graph.output

input_all = [node.name for node in onnx_model.graph.input]
input_initializer = [node.name for node in onnx_model.graph.initializer]
net_feed_input = list(set(input_all) - set(input_initializer))

print('Inputs: ', net_feed_input)
print('Outputs: ', output)
```

I get:

```
/usr/local/bin/python3.8 /Users/myname/PycharmProjects/onnx/check_onnx.py
The model is valid!
Inputs:  []
Outputs:  [name: "214"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim { dim_value: 1 }
      dim { dim_value: 3 }
      dim { dim_value: 1024 }
      dim { dim_value: 1024 }
    }
  }
}
]
```

Note that the inputs are missing. Here's the beginning of the Netron graph: (screenshot: netron-graph)

Any help would be greatly appreciated!!!

Mercury-ML · May 17 '21 23:05

I had the same problem. I tried changing test.py by adding inputs and outputs to torch.onnx.export, but it still returns no input or output fields. Is there something I'm missing?

LucasCTN · Jun 09 '21 17:06

Still unresolved for me. Unfortunately, I'm running inference with test.py as a stopgap.

Mercury-ML · Jun 09 '21 18:06

I had the same issue and used this fork: https://github.com/justinpinkney/pix2pixHD/commit/300305115a9ed0411579e2662afbc72851ba8f60

Now, in the Netron graph of the exported ONNX model, I can see the input ("inp") and the output ("214").

But I am not sure how to run inference on that ONNX model. I am trying with this code, but I am stuck on another error:

```python
import numpy as np
import onnxruntime as rt
from PIL import Image  # this import was missing from my original snippet

session = rt.InferenceSession("model.onnx")
img = np.array(Image.open("test.jpg"), dtype=np.float32)
inname = [input.name for input in session.get_inputs()]
outname = [output.name for output in session.get_outputs()]

inputs = {session.get_inputs()[0].name: img}
outs = session.run(outname, inputs)
```

```
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: inp
Got: 3 Expected: 4 Please fix either the inputs or the model.
```

The third parameter of `run` is of type RunOptions and is None by default. I tried passing an empty one, but it keeps giving the same error. Any ideas?
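For what it's worth, the error says the model expects a rank-4 tensor (batch, channels, height, width), while `np.array(Image.open(...))` returns a rank-3 HWC array. A minimal preprocessing sketch (the 1024×1024 shape is assumed from the Outputs dump above, not confirmed):

```python
import numpy as np

# Simulate an RGB image loaded as HWC, which is what np.array(Image.open(...)) returns
img_hwc = np.zeros((1024, 1024, 3), dtype=np.float32)

# HWC -> CHW, then add a batch dimension: (3, 1024, 1024) -> (1, 3, 1024, 1024)
img_nchw = np.expand_dims(img_hwc.transpose(2, 0, 1), axis=0)

print(img_nchw.shape)  # (1, 3, 1024, 1024)
```

Feeding `img_nchw` instead of `img` should at least get past the rank check; RunOptions has nothing to do with this error.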

mianor64 · Sep 13 '21 22:09

Same problem, looking for help

ousinkou · Jun 16 '22 04:06

Is anyone able to run it using the TensorRT engine, or has anyone successfully run inference from the ONNX model?

pradyumnjain · Jul 15 '22 11:07

When you run inference on an ONNX model, the input image shape must be the same as the shape you used in `torch.onnx.export` when converting the .pth to .onnx. Alternatively, you can use `dynamic_axes` to get dynamic input and output shapes.

Cococyh · Jul 27 '22 03:07