CRNN-Pytorch2TensorRT-via-ONNX

how to handle different length input when inference?

Open lucasjinreal opened this issue 4 years ago • 1 comment

Is the input width (i.e. the image length) of the generated ONNX model fixed? If so, how do we handle prediction on inputs with different character lengths when running inference with ONNX or TensorRT?

lucasjinreal · Jul 17 '20 08:07

> Is the input width (i.e. the image length) of the generated ONNX model fixed? If so, how do we handle prediction on inputs with different character lengths when running inference with ONNX or TensorRT?

Thanks for your question. CRNN's input size is itself fixed: (32, 100) for grayscale images. However, the network effectively adapts its receptive field: when the input image is long, the feature map at a given position can cover features representing more characters. This is why CRNN performs well on text of different lengths. Hope my answer helps.
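Since the exported ONNX graph expects a fixed (32, 100) input, variable-width text images need to be resized and padded to that shape before inference. A minimal preprocessing sketch (the function name and padding strategy are my own assumptions, not from this repo; it uses plain numpy to avoid an OpenCV dependency):

```python
import numpy as np

def preprocess(img: np.ndarray, target_h: int = 32, target_w: int = 100) -> np.ndarray:
    """Resize a grayscale text image to the fixed CRNN input size.

    Scales to the target height while keeping aspect ratio, then
    right-pads with the last column so short words are not stretched.
    (Hypothetical helper, not part of the repository's code.)
    """
    h, w = img.shape
    # width after scaling to target_h, clamped to the fixed model width
    new_w = max(1, min(target_w, int(round(w * target_h / h))))
    # nearest-neighbour resize via index sampling
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[rows][:, cols]
    if new_w < target_w:
        # pad on the right by repeating the last column
        pad = np.repeat(resized[:, -1:], target_w - new_w, axis=1)
        resized = np.concatenate([resized, pad], axis=1)
    return resized
```

Every image, whether a short 3-character word or a long phrase, ends up as a (32, 100) array, which matches the fixed input the ONNX/TensorRT engine was built for.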

YIYANGCAI · Jul 17 '20 14:07