CRAFT-pytorch
About the difference between Python and C++ TensorRT 7.0 when inferring?
The model is exported to ONNX and converted into an engine file for inference on TensorRT 7.0. Inference with trtexec succeeds, but the result of inference on TensorRT 7.0 with the same input tensor is not the same as the inference result of the model in Python. Is there a better solution for this? Thank you very much.
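A minimal comparison sketch (file names and tolerances are hypothetical) for quantifying the mismatch: dump the PyTorch output and the TensorRT output for the same input tensor to .npy files, then compare them element-wise in numpy:

import numpy as np

py_out = np.load("pytorch_output.npy")    # heatmap from the Python model
trt_out = np.load("tensorrt_output.npy")  # same tensor from the TensorRT 7.0 run

print("max abs diff  :", np.max(np.abs(py_out - trt_out)))
print("mean abs diff :", np.mean(np.abs(py_out - trt_out)))
print("allclose(1e-3):", np.allclose(py_out, trt_out, atol=1e-3))

Small differences (roughly below 1e-3) are often just floating-point accumulation noise; large, structured differences suggest a conversion problem.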
Same here, but my ONNX model gives the same result on TRT 5 and TRT 6; TRT 7 is different.
Hi, @CallmeZhangChenchen,
My inference on TensorRT 7.0 is also bad. How did you generate the ONNX model and test it on TRT 6? I mean: did you use ONNX 1.5.0, opset 9, and onnx-tensorrt 6.0? Thank you in advance.
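For reference, the installed package version and the opset actually recorded in an exported model can be checked directly (the file name is an example):

import onnx

model = onnx.load("craft.onnx")
print("onnx package:", onnx.__version__)
for opset in model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "version:", opset.version)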
Hi @tairen99
TRT 6 is used the same way as TRT 7.
I didn't use onnx-tensorrt; TensorRT ships with trtexec, which you can try.
I think you should first determine where the problem is, then open an issue on the TensorRT repository.
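To narrow down where the problem is, one option (a rough sketch; the input shape 1x3x384x384 is an assumption) is to first confirm that the ONNX file itself is structurally valid and runs under ONNX Runtime before blaming the TensorRT conversion:

import numpy as np
import onnx
import onnxruntime

onnx.checker.check_model(onnx.load("craft.onnx"))  # structural validation

sess = onnxruntime.InferenceSession("craft.onnx")
dummy = np.random.randn(1, 3, 384, 384).astype(np.float32)  # assumed input shape
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])

If ONNX Runtime already disagrees with PyTorch, the export is at fault; if it agrees and only TensorRT differs, the issue is in the engine conversion.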
Hi, @CallmeZhangChenchen,
Thank you for your reply.
I could not use trtexec with the ONNX model that I generated due to the error "ERROR: builtin_op_importers.cpp:3271 In function importUpsample:". The trtexec command line is: "trtexec --onnx=craft_sim_9.onnx --dumpOutput --batch=1"
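One workaround that is sometimes suggested for importUpsample failures (an assumption here, not verified against this exact model) is to re-export with opset 11, where PyTorch's bilinear upsampling is emitted as a Resize op instead of the older Upsample op; a sketch, reusing the same net and example input x used for the original export:

import torch

torch.onnx.export(net, x, "craft_opset11.onnx",
                  export_params=True,
                  opset_version=11,                    # Resize instead of Upsample
                  input_names=["input"],               # names are arbitrary labels
                  output_names=["output", "feature"])

Note that opset 11 export may require a reasonably recent torch version.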
I was able to use onnx-tensorrt to convert this ONNX model into a TRT engine file to work around this issue, so I can test the model. The trtexec command line is: "trtexec --loadEngine=my_engine.trt --dumpOutput --batch=1 --safe --loadInputs=Input:imgbin.txt"
But this execution prints all the information and tensors to the screen. Even if I redirect this output to a file, the file is around 200 MB, and I don't know how to grab the final output from it for verification.
Can you share some tips on how to use trtexec correctly to verify the model output?
Thank you!
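Instead of parsing the trtexec dump, another option is to deserialize the engine and run it directly from Python, so the output arrives as a numpy array that can be compared against the PyTorch result. A rough sketch, assuming a static-shape engine, the TensorRT 7 Python bindings, and pycuda (the engine path and input variable are examples):

import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def run_trt_engine(engine_path, input_array):
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    bindings, outputs = [], []
    for name in engine:  # iterate over binding names in binding-index order
        idx = engine.get_binding_index(name)
        shape = tuple(engine.get_binding_shape(idx))
        dtype = trt.nptype(engine.get_binding_dtype(idx))
        device_mem = cuda.mem_alloc(int(np.prod(shape)) * np.dtype(dtype).itemsize)
        bindings.append(int(device_mem))
        if engine.binding_is_input(idx):
            cuda.memcpy_htod(device_mem, np.ascontiguousarray(input_array.astype(dtype)))
        else:
            outputs.append((np.empty(shape, dtype=dtype), device_mem))

    context.execute_v2(bindings)  # explicit-batch engines from the ONNX parser
    for host, device_mem in outputs:
        cuda.memcpy_dtoh(host, device_mem)
    return [host for host, _ in outputs]

trt_outputs = run_trt_engine("my_engine.trt", onnx_input)
print([o.shape for o in trt_outputs])

Here onnx_input is the same preprocessed tensor fed to the PyTorch model, so the returned arrays can be compared directly with np.allclose.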
Hello @DJMeng @CallmeZhangChenchen @tairen99, I am also working on converting the .pth model to ONNX with a fixed input of 384*384. The ONNX model can be generated successfully; however, its output is different from that of the original .pth file. I think the problem may lie in the conversion process. Could you help me with this? Many thanks! My torch version is 1.3.1 and onnx is 1.6.0. I attach my conversion code below:
import cv2
import numpy as np
import torch
import onnxruntime
from torch.autograd import Variable

from craft import CRAFT          # model definition from the CRAFT-pytorch repo
import imgproc                   # preprocessing helpers from the repo
from test import copyStateDict   # strips the 'module.' prefix; defined in the repo's test.py

net = CRAFT()  # initialize
net = net.cuda()
# net = torch.nn.DataParallel(net)
net.load_state_dict(copyStateDict(torch.load('./weights/craft_mlt_25k.pth')))
net.eval()

# load data
image = imgproc.loadImage('./test_data/chi/0021_crop.jpg')

# resize
img_resized, target_ratio, size_heatmap = imgproc.resize_aspect_ratio(
    image, 384, interpolation=cv2.INTER_LINEAR, mag_ratio=1.5)
ratio_h = ratio_w = 1 / target_ratio

# preprocessing
x = imgproc.normalizeMeanVariance(img_resized)
x = torch.from_numpy(x).permute(2, 0, 1)  # [h, w, c] to [c, h, w]
x = Variable(x.unsqueeze(0))              # [c, h, w] to [b, c, h, w]
onnx_input = x.data.numpy()
x = x.cuda()

# trace export
torch.onnx.export(net,
                  x,
                  './craft.onnx',
                  export_params=True,
                  verbose=True)

# test the inference process
if 1:
    session = onnxruntime.InferenceSession("./craft.onnx")
    input_name = session.get_inputs()[0].name
    print('\t>>input: {}, {}, {}'.format(session.get_inputs()[0].name,
                                         session.get_inputs()[0].shape,
                                         session.get_inputs()[0].type))
    _outputs = session.get_outputs()
    for kk in range(len(_outputs)):
        _out = _outputs[kk]
        # print('\t>>out-{}: {}, {}, {}'.format(kk, _out.name, _out.shape, _out.type))
    _x = np.array(onnx_input).astype(np.float32)
    p = session.run(None, {input_name: _x})
    out1 = p[0]
    print('============================================================================')
    print('>>summary:')
    print("onnx input:{}".format(_x))
    print('onnx out: {} \n{}'.format(np.shape(out1), out1))
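To check where the mismatch appears, a small follow-up (tolerance values are arbitrary examples) that runs the same tensor through the PyTorch model and compares it with the ONNX Runtime result out1 from above:

# run the PyTorch model on the same input and compare with the ONNX output
with torch.no_grad():
    y_torch, _ = net(x)          # CRAFT returns (region/affinity heatmaps, features)
torch_out = y_torch.cpu().numpy()

print("max abs diff  :", np.max(np.abs(torch_out - out1)))
print("allclose(1e-4):", np.allclose(torch_out, out1, atol=1e-4))

If these already disagree, the problem is in the export itself (for example the opset, or a layer that does not trace cleanly), not in TensorRT.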