ONNX Inference Decoding
Hello,
I have converted the PARSeq model from Torch Hub to the ONNX format.
Has anyone done inference and decoding with the ONNX model? The tokenizer.decode() function cannot be used for this purpose.
In general, ONNX export captures only the model itself; preprocessing and postprocessing are not included.
tokenizer.decode is post-processing that runs outside the model, so it is not converted along with it.
In my case, I just implemented it separately, because the code is simple and there is no advantage to running it on an accelerator.
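A minimal sketch of such a standalone decoder, assuming the tokenizer layout used by the original PARSeq repository (index 0 is the `[E]` end-of-sequence token, followed by the 94-character charset of digits, lowercase, uppercase, and punctuation). The exact charset and ordering are assumptions here and must match what the model was trained with:

```python
import string

import numpy as np

# Assumed 94-character charset in the order used by the original PARSeq
# training setup; verify against your own model's charset.
CHARSET = (
    string.digits + string.ascii_lowercase + string.ascii_uppercase + string.punctuation
)

def decode_logits(logits: np.ndarray) -> tuple[str, list[float]]:
    """Greedy-decode a (seq_len, num_classes) logits array from the ONNX model.

    Returns the decoded text and per-character confidences. Assumes
    class index 0 is the [E] (EOS) token and indices 1..len(CHARSET)
    map onto CHARSET.
    """
    # Numerically stable softmax over the class dimension.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    ids = probs.argmax(axis=-1)

    chars, confs = [], []
    for t, i in enumerate(ids):
        if i == 0:  # [E] (EOS): stop decoding
            break
        chars.append(CHARSET[i - 1])  # shift by 1 for the EOS slot at index 0
        confs.append(float(probs[t, i]))
    return "".join(chars), confs
```

Usage would be to run the ONNX session, take the logits output for each batch element, and call `decode_logits` on it in place of `tokenizer.decode`.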
@dankernel, can you please share your implementation of the tokenizer outside of the model?
I found a PyTorch version of PARSeq that can also be converted to ONNX and TensorRT: https://github.com/bharatsubedi/PARseq_torch