convolutional-handwriting-gan
Training on numbers dataset!
Hello,
Thank you for this amazing work, it's really useful for the DS community!
I have an issue when I try to train the model from scratch on images containing mainly digits (either dates "dd/mm/yy" or simple sequences from the ICDAR 2013 dataset).
The problem is that, at some point, the generator hinge loss becomes NaN (in ScrabbleGAN_baseModel.backward_G); the reason is that the tensor "ones_img" in ScrabbleGAN_baseModel.get_current_visuals() becomes NaN first.
I would like to know how to avoid this situation. Thanks in advance for your help!
P.S. Here are some logs:
(the loss_G and dis_fake values correspond to the generator hinge loss and the ones_img tensor, respectively)
Did you try to run the regular IAM/RIMES experiment from scratch?
Yes, I successfully trained the model on IAM from scratch.
I have the same problem after changing the alphabet (only lowercase letters and digits). Everything worked fine on my own dataset until I tried again with the adjusted alphabet: after the first epoch, the real and fake OCR losses became negative.
Edit: After changing the alphabet back to the original one (alphabetEnglish), the negative losses disappeared again, so most likely the issue occurs when characters are encoded or decoded?
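A quick way to test that hypothesis is to check that every character appearing in the transcripts exists in the new alphabet; a minimal sketch (the function and argument names are hypothetical, not from this repo):

```python
# Report any transcript characters that are missing from the alphabet used
# for encoding; such characters can corrupt the encoded OCR targets.
def check_alphabet(transcripts, alphabet):
    missing = {c for text in transcripts for c in text if c not in alphabet}
    if missing:
        print("characters missing from the alphabet:", sorted(missing))
    return not missing
```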
@AhmedAl93 Have you found a solution to this problem?
@kymillev No solution so far :/
Hi @kymillev and @AhmedAl93, thanks for your interest in this package, and for your patience. We try our best to respond to your questions in these challenging times.
The most common cause of these errors when I hit them myself was data quality, which led to the NaN loss. This might include:
- words with an empty transcript/annotation.
- words with bad/empty image content (e.g. all zeros).
- words that consist mainly or only of punctuation, or of a single letter (e.g. "i").
- words that are too narrow/too wide after the data preparation (all characters should have a similar aspect ratio), even though these are supposed to be filtered out.
Try filtering the data, as sketched below, and see if this solves the problem.
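A minimal sketch of such a filter, assuming the data is a list of (image, transcript) pairs with grayscale numpy images; the function name and thresholds are illustrative, not from this repo:

```python
import string

import numpy as np

# Illustrative filter over (image, transcript) pairs; `image` is assumed to be
# a grayscale numpy array of shape (H, W). Thresholds are rough guesses.
def filter_samples(samples, min_chars=2, min_per_char_ratio=0.3, max_per_char_ratio=3.0):
    kept = []
    for image, transcript in samples:
        text = transcript.strip()
        # Drop empty transcripts and single-character words (e.g. "i").
        if len(text) < min_chars:
            continue
        # Drop words made up mainly or only of punctuation.
        if sum(c in string.punctuation for c in text) > len(text) // 2:
            continue
        # Drop images with bad/empty content (e.g. all zeros).
        if image.size == 0 or not np.any(image):
            continue
        # Drop words that are too narrow or too wide per character.
        h, w = image.shape[:2]
        per_char_ratio = (w / len(text)) / h
        if not (min_per_char_ratio <= per_char_ratio <= max_per_char_ratio):
            continue
        kept.append((image, text))
    return kept
```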
One addition to @rlit's points: make sure your real and fake images have a similar size distribution. For example, if every word in your fake-image lexicon is 3 characters long, then every fake image will be 48 pixels wide (16 pixels per character is the standard for the fake image generator). If the real images have a different number of characters, or are 3 characters long but not resized to 48 pixels wide, the discriminator will simply learn that 48-wide images are likely fake and anything else is real. This point had me stuck for a while, but after fixing it and the points above from @rlit I could train on a different alphabet (non-ASCII characters).
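A quick way to check for this mismatch is to compare each real image's width to the width a fake image with the same transcript length would get (16 pixels per character); a minimal sketch, under the same (image, transcript) assumption as above:

```python
import numpy as np

PIXELS_PER_CHAR = 16  # width per character used by the fake-image generator

# Summarize how real image widths compare to the equivalent fake image widths.
# Assumes transcripts are non-empty (see the filter sketch above).
def width_ratio_report(samples):
    ratios = np.array([
        image.shape[1] / (PIXELS_PER_CHAR * len(transcript))
        for image, transcript in samples
    ])
    print(f"real/fake width ratio: mean={ratios.mean():.2f} "
          f"std={ratios.std():.2f} min={ratios.min():.2f} max={ratios.max():.2f}")
```

Ratios far from 1.0 mean the discriminator can tell real from fake by image width alone.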
> If the real images have a different number of characters, or are 3 characters long but not resized to 48 pixels wide, the discriminator will simply learn that 48-wide images are likely fake and anything else is real.
Hi @darraghdog, so this means the number of characters in a real image must equal the number in a fake image, right?
I found that an approximately equal distribution of the number of characters, with images resized to the same width per character, helps a lot.
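In practice that means resizing each real word image so its width is proportional to its transcript length; a minimal sketch, assuming OpenCV and a 32-pixel image height (both the constant values and the function name are mine):

```python
import cv2

PIXELS_PER_CHAR = 16  # match the fake-image generator
IMAGE_HEIGHT = 32     # assumed image height

# Resize a real word image so its width scales with the transcript length,
# matching the size distribution of the generated (fake) images.
def resize_to_char_width(image, transcript):
    target_w = PIXELS_PER_CHAR * max(1, len(transcript))
    return cv2.resize(image, (target_w, IMAGE_HEIGHT), interpolation=cv2.INTER_AREA)
```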
@darraghdog I found that the real images are padded to the same size even though they have different numbers of characters. So should I make the number of characters the same within a batch?