Feature Request: Dequantization - conversion of int model to float model
@stweil You had mentioned at one point that it should be possible to fine-tune fast models. This feature would be useful because many fast models use a smaller network size than the corresponding best models, so the fine-tuned models would also be faster.
Is this something that can be included in 5.0.0?
I am afraid that would delay 5.0.0 further as I don't have the time to implement that until the end of November.
Technically, the code would read the fast model with integer parameters and convert those parameters to float (or double) to get a best model, which is then used for training.
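For illustration, here is a minimal C++ sketch of that conversion, assuming the common scheme where quantized models store int8 weights together with a per-row scale factor. The names (`Int8Row`, `DequantizeRow`) are hypothetical and do not reflect Tesseract's actual API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical representation of one row of quantized weights:
// int8 values plus a scale factor that maps them back to real values.
struct Int8Row {
  std::vector<int8_t> weights;
  double scale;
};

// Dequantize: recover floating-point weights as w_float = w_int * scale.
std::vector<double> DequantizeRow(const Int8Row& row) {
  std::vector<double> out;
  out.reserve(row.weights.size());
  for (int8_t w : row.weights) {
    out.push_back(static_cast<double>(w) * row.scale);
  }
  return out;
}
```

Applying this to every weight matrix of a fast model would yield a float model that the existing training code could then fine-tune.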
+1 for this feature request.
After 5.0.0...
The term for converting from an int model to a float model is Dequantization / Dequantize.
https://www.tensorflow.org/api_docs/python/tf/quantization/dequantize
https://pytorch.org/docs/stable/generated/torch.dequantize.html