CRNN-Pytorch2TensorRT-via-ONNX
How to handle different-length input at inference?
The generated ONNX model has a fixed input width (the image length). How do we handle predictions for inputs with a different number of characters when running inference with ONNX or TensorRT?
Thanks for your question. The CRNN's input size itself is fixed at (32, 100) for grayscale images, so images of any width are resized to that shape before inference. The algorithm effectively adapts its receptive field: when the input image is long, each position in the feature map covers a region representing more characters. As a result, the CRNN performs well across different input sizes. Hope this helps.
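To make this concrete, here is a minimal sketch of the preprocessing step implied above: any grayscale image, whatever its width, is resampled to the fixed (32, 100) shape before being fed to the ONNX/TensorRT engine. The function name `resize_to_crnn_input` and the nearest-neighbour sampling are my own illustration, not code from this repo; a real pipeline would typically use `cv2.resize` or PIL.

```python
import numpy as np

def resize_to_crnn_input(img, target_h=32, target_w=100):
    """Resample a 2-D grayscale image to the fixed CRNN input size.

    Nearest-neighbour index sampling keeps this sketch dependency-free;
    swap in cv2.resize(img, (target_w, target_h)) for production use.
    """
    h, w = img.shape
    # Map each target row/column back to a source row/column index.
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(target_w) * w / target_w).astype(int)
    return img[rows][:, cols]

# A wide image (many characters) and a narrow one both end up (32, 100);
# the network's receptive field then covers more or fewer characters
# per feature-map position, as described above.
wide = np.random.rand(48, 320).astype(np.float32)
narrow = np.random.rand(48, 60).astype(np.float32)
print(resize_to_crnn_input(wide).shape)    # (32, 100)
print(resize_to_crnn_input(narrow).shape)  # (32, 100)
```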