
ONNX Inference Decoding

Open rafaelagrc opened this issue 3 years ago • 3 comments

Hello,

I have converted the PARSeq model from Torch Hub to the ONNX format. I would like to ask if anyone has done inference and decoding with the ONNX model, since one cannot use the tokenizer.decode() function for this purpose.

rafaelagrc avatar Dec 13 '22 17:12 rafaelagrc

In general, ONNX export converts only the model itself; preprocessing and postprocessing are not included. tokenizer.decode is post-processing that runs outside the model, so it is not converted along with it. In my case, I just implemented it separately, since the code is simple and there is no advantage to running it on an accelerator.
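The separate implementation mentioned above is not shown in the thread. As a rough illustration, a minimal greedy-decode sketch over raw ONNX output logits might look like the following; the toy charset, the (T, C) logits shape, and the demo tensor are assumptions for illustration, not taken from the thread (PARSeq's tokenizer does place the end-of-sequence token [E] at class index 0):

```python
import numpy as np

# Minimal sketch of PARSeq-style greedy decoding applied to raw logits.
# Assumptions (illustrative only): the per-image output has shape (T, C),
# class index 0 is the end-of-sequence token [E], and the remaining indices
# map one-to-one onto the character set. PARSeq's real charset is the 94
# printable ASCII characters; a tiny stand-in alphabet is used here.
CHARSET = "abc0123"
EOS_ID = 0
ITOS = ["[E]"] + list(CHARSET)  # index-to-string lookup table

def greedy_decode(logits: np.ndarray) -> str:
    """Take the argmax class at each timestep and stop at the first EOS."""
    ids = logits.argmax(axis=-1)
    chars = []
    for class_id in ids:
        if class_id == EOS_ID:
            break
        chars.append(ITOS[class_id])
    return "".join(chars)

# Fake logits spelling "ab1" then EOS, standing in for an ONNX Runtime
# output such as session.run(None, {"input": image})[0][0].
logits = np.full((4, len(ITOS)), -10.0)
for t, class_id in enumerate([1, 2, 5, EOS_ID]):
    logits[t, class_id] = 10.0

print(greedy_decode(logits))  # → ab1
```

A per-character confidence can be recovered the same way by applying a softmax over the class axis and reading off the probability at each argmax index before the EOS cut-off.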

dankernel avatar Dec 16 '22 16:12 dankernel

@dankernel can you please share your implementation of the tokenizer outside of the model?

Shivanshmundra avatar Jan 16 '23 09:01 Shivanshmundra

I found a PyTorch version of PARSeq that can also be converted to ONNX and TensorRT: https://github.com/bharatsubedi/PARseq_torch

WongVi avatar Mar 06 '23 07:03 WongVi