
Feature Request: Dequantization - conversion of int model to float model

Open Shreeshrii opened this issue 3 years ago • 4 comments

@stweil You had mentioned at one point that it should be possible to finetune fast models. It would be useful to have this feature, as many fast models use a smaller network size than the best models, and hence the finetuned models would also be faster.

Is this something that can be included in 5.0.0?

Shreeshrii avatar Nov 27 '21 05:11 Shreeshrii

I am afraid that would delay 5.0.0 further as I don't have the time to implement that until the end of November.

stweil avatar Nov 27 '21 08:11 stweil

Technically the code would read the fast model with integer parameters and convert those parameters to float (or double) to get a best model which is then used for training.

stweil avatar Nov 27 '21 09:11 stweil
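The conversion stweil describes could be sketched roughly as follows. This is a minimal illustration of affine dequantization in general, not Tesseract's actual serialization code; the function name, the `scale`/`zero_point` parameters, and the use of NumPy are all assumptions for the example:

```python
import numpy as np

def dequantize(int_weights: np.ndarray, scale: float, zero_point: int = 0) -> np.ndarray:
    """Map quantized integer weights back to float32.

    Hypothetical helper: w_float = (w_int - zero_point) * scale.
    A real converter would apply this per layer (or per channel),
    using the scale stored alongside each quantized tensor.
    """
    return (int_weights.astype(np.float32) - zero_point) * scale

# Example: int8 weights that were quantized with scale 0.05
q = np.array([-128, 0, 64, 127], dtype=np.int8)
w = dequantize(q, scale=0.05)
print(w)  # approximately [-6.4, 0.0, 3.2, 6.35]
```

The resulting float weights would then be written out in the "best" model format, after which normal finetuning can proceed. Note that dequantization cannot restore the precision lost during the original quantization, so a model finetuned this way starts from a slightly degraded checkpoint.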

+1 for this feature request.

After 5.0.0...

amitdo avatar Nov 28 '21 16:11 amitdo

The term for converting an int model to a float model is dequantization ("dequantize"):

https://www.tensorflow.org/api_docs/python/tf/quantization/dequantize

https://pytorch.org/docs/stable/generated/torch.dequantize.html

amitdo avatar Aug 10 '22 11:08 amitdo