Darwin Bautista
In my early experiments, I created and used a `Sampler` subclass which tried to cleanly implement the "batch-balanced" sampling of Baek et al. (`clovaai/deep-text-recognition-benchmark`): https://gist.github.com/baudm/fa08974319150c65caa96d6062b76aa9 This is how I used...
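For context, here's a minimal sketch of the idea behind that sampler (the class name, `samples_per_dataset` quota, and wrap-around handling below are illustrative, not the gist's actual code):

```python
import random
from torch.utils.data import ConcatDataset, Sampler

class BatchBalancedSampler(Sampler):
    """Yield batches that draw a fixed number of indices from each sub-dataset."""

    def __init__(self, dataset: ConcatDataset, samples_per_dataset: int):
        self.samples_per_dataset = samples_per_dataset
        # ConcatDataset.cumulative_sizes gives the end index of each sub-dataset.
        ends = dataset.cumulative_sizes
        starts = [0] + ends[:-1]
        self.index_pools = [list(range(s, e)) for s, e in zip(starts, ends)]

    def __iter__(self):
        # Shuffle each sub-dataset's indices independently every epoch.
        pools = [random.sample(pool, len(pool)) for pool in self.index_pools]
        for i in range(len(self)):
            batch = []
            for pool in pools:
                # Wrap around smaller datasets so every batch keeps the same mix.
                batch.extend(pool[(i * self.samples_per_dataset + j) % len(pool)]
                             for j in range(self.samples_per_dataset))
            yield batch

    def __len__(self):
        # One epoch is defined by the largest sub-dataset.
        return max(len(p) for p in self.index_pools) // self.samples_per_dataset
```

It would be passed to the `DataLoader` as `batch_sampler=BatchBalancedSampler(concat_dataset, samples_per_dataset=64)`.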
Yes. You could simply use the NAR branch of the inference code (https://github.com/baudm/parseq/blob/8fa51009088da67a23b44c9c203fde52ffc549e5/strhub/models/parseq/system.py#L135-L137) for training in `training_step()`.
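Roughly, that would look something like the sketch below. The `tokenizer.encode()` helper, the `pad_id` attribute, and the assumption that `forward()` runs the single parallel (NAR) decode are illustrative rather than the exact PARSeq internals; the target/logit alignment also depends on how BOS/EOS/pad tokens are handled.

```python
import torch.nn.functional as F

def training_step(self, batch, batch_idx):
    images, labels = batch
    # Hypothetical helper: encode ground-truth strings into padded target token IDs.
    targets = self.tokenizer.encode(labels, self.device)   # (N, L)
    # Single parallel decoding pass over all positions, i.e. the NAR branch
    # of the linked inference code, reused for training.
    logits = self.forward(images)                           # (N, L, num_classes)
    loss = F.cross_entropy(logits.flatten(end_dim=1), targets.flatten(),
                           ignore_index=self.pad_id)        # pad_id: assumed attribute
    self.log('loss', loss)
    return loss
```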
> Hello thanks for the great work!
>
> I was testing the model on single-line images but with multiple words separated by whitespace. However, it seems PARSeq does not...
Thanks for your inquiry. I actually looked at the Hugging Face Hub when I created the Gradio demo. I totally forgot about it, but I'll take a look again the week after...
Models have been uploaded to https://huggingface.co/baudm/
**TODO:** need to update documentation
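In the meantime, here's roughly how the released weights can be loaded via the torch.hub entry points (see the README's Getting Started snippet for the canonical version; the image path below is just a placeholder):

```python
import torch
from PIL import Image
from strhub.data.module import SceneTextDataModule

# Load one of the released pretrained models via torch.hub.
parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()
img_transform = SceneTextDataModule.get_transform(parseq.hparams.img_size)

img = Image.open('demo.png').convert('RGB')
logits = parseq(img_transform(img).unsqueeze(0))

# Greedy decoding of the most likely label.
pred = logits.softmax(-1)
label, confidence = parseq.tokenizer.decode(pred)
print(label[0])
```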
The `pretrained` parameter is provided for easy fine-tuning of the released weights. If you want to load your own checkpoint, use the `ckpt_path` parameter. This expects a Lightning checkpoint (weights...
Sorry, I didn't catch the note about `test` and `read`; `ckpt_path` is for `train.py` only. For `test` and `read`, see: https://github.com/baudm/parseq#evaluation. The very first sample command shows how to use your...
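If you'd rather do it from Python, a small sketch of the same thing, assuming the `load_from_checkpoint` helper in `strhub.models.utils` that `test.py`/`read.py` use (the checkpoint path is just a placeholder):

```python
from strhub.models.utils import load_from_checkpoint

# Placeholder path: point this at your own Lightning checkpoint.
model = load_from_checkpoint('outputs/parseq/<timestamp>/checkpoints/last.ckpt').eval()

# If I remember correctly, the released weights work with the same helper
# via the pretrained=<name> shorthand:
# model = load_from_checkpoint('pretrained=parseq').eval()
```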
I think you're using the wrong checkpoint. You can check by manually loading the checkpoint with `torch.load` and inspecting the values stored under the hyperparameters key.
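For example (Lightning stores them under the `hyper_parameters` key; the path is a placeholder):

```python
import torch

ckpt = torch.load('path/to/your/checkpoint.ckpt', map_location='cpu')
# Lightning checkpoints keep the saved hparams here; check model name, charset, etc.
print(ckpt['hyper_parameters'])
```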
If you're referring to PARSeq's runtime parameters `decode_ar` and `refine_iters`, then yes, you may modify them without retraining the model. The hyperparameters are stored in the `hparams` attribute (e.g. `model.hparams`)...
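Concretely, something like this (assuming the model was loaded via torch.hub as in the earlier snippet, and that the instance attributes mirror the saved hparams they were initialized from):

```python
import torch

parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()
print(parseq.hparams.decode_ar, parseq.hparams.refine_iters)

# Switch to purely non-autoregressive decoding with no refinement iterations.
# No retraining needed; these only change how inference is run.
parseq.decode_ar = False
parseq.refine_iters = 0
```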