pix2pixHD
Export to ONNX is missing inputs, can't run inference
This issue is blocking me from running inference on my trained model.
When running inference from an exported ONNX file (I exported to ONNX by adding the --export_onnx flag when testing), the inputs seem to be missing and I get an error. This code:
```python
import onnxruntime as rt

sess = rt.InferenceSession('ONNX-model.onnx')
input_name = sess.get_inputs()[0].name
print("input name", input_name)
```
results in:

```
input_name = sess.get_inputs()[0].name
IndexError: list index out of range
```
However, when I check out the model with
```python
import onnx

model_path = 'ONNX-model.onnx'
onnx_model = onnx.load(model_path)

# Check the model
try:
    onnx.checker.check_model(onnx_model)
except onnx.checker.ValidationError as e:
    print('The model is invalid: %s' % e)
else:
    print('The model is valid!')

output = onnx_model.graph.output

input_all = [node.name for node in onnx_model.graph.input]
input_initializer = [node.name for node in onnx_model.graph.initializer]
net_feed_input = list(set(input_all) - set(input_initializer))

print('Inputs: ', net_feed_input)
print('Outputs: ', output)
```
I get:
```
/usr/local/bin/python3.8 /Users/myname/PycharmProjects/onnx/check_onnx.py
The model is valid!
Inputs:  []
Outputs:  [name: "214"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim { dim_value: 1 }
      dim { dim_value: 3 }
      dim { dim_value: 1024 }
      dim { dim_value: 1024 }
    }
  }
}
]
```
Note the inputs are missing.
Here's the beginning of the Netron graph.
Any help would be greatly appreciated!!!
I had the same problem.
I tried changing test.py by adding input and output names to torch.onnx.export, but it still returns no input or output fields. Is there something I'm missing?
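For reference, here is roughly the kind of export call I mean. It is only a minimal sketch with a stand-in module and placeholder names, not the actual generator or data from test.py:

```python
import torch
import torch.nn as nn

# Stand-in for the pix2pixHD generator; the module, names, and shapes
# below are placeholders, not the real values from test.py.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
dummy_input = torch.randn(1, 3, 1024, 1024)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["inp"],      # records a named graph input
    output_names=["output"],  # records a named graph output
)
```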
Still unresolved for me. Unfortunately, I'm running inference using test.py as a stopgap.
I had the same issue and used this fork: https://github.com/justinpinkney/pix2pixHD/commit/300305115a9ed0411579e2662afbc72851ba8f60
Now I can see the input ("inp") and the output ("214") in the Netron graph of the exported ONNX model.
But I am not sure how to run inference on that ONNX model. I am trying with the code below but am stuck on another error:
```python
import onnxruntime as rt
import numpy as np
from PIL import Image

session = rt.InferenceSession("model.onnx")
img = np.array(Image.open("test.jpg"), dtype=np.float32)

inname = [input.name for input in session.get_inputs()]
outname = [output.name for output in session.get_outputs()]
inputs = {session.get_inputs()[0].name: img}
outs = session.run(outname, inputs)
```
```
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: inp Got: 3 Expected: 4 Please fix either the inputs or the model.
```
The third parameter of session.run is of type RunOptions and is None by default. I tried passing an empty one, but it keeps giving the same error. Any idea?
Same problem, looking for help.
Is anyone able to run it using the TensorRT engine? Or has anyone successfully run inference from the ONNX model?
When you run inference on an ONNX model, the input image shape should be the same as the shape used when you called torch.onnx.export to convert the .pth model to .onnx. Of course, you can use dynamic_axes to get dynamic input and output shapes.
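As a concrete sketch of that fix, assuming the model was exported with the fixed 1×3×1024×1024 input shown earlier (file names are placeholders):

```python
import numpy as np
import onnxruntime as rt
from PIL import Image

# The exported graph expects a rank-4 NCHW tensor, so an HWC image array
# needs a transpose and a batch dimension before calling run().
img = np.array(Image.open("test.jpg").convert("RGB"), dtype=np.float32)  # (H, W, 3)
img = img.transpose(2, 0, 1)[np.newaxis, ...]  # -> (1, 3, H, W)

session = rt.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
outs = session.run(None, {input_name: img})  # None fetches all outputs
```

Without dynamic_axes, the height and width here must also match the export resolution (1024×1024 in the graph above).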