
Training on numbers dataset!

Open AhmedAl93 opened this issue 4 years ago • 9 comments

Hello,

Thank you for this amazing work, really useful for the DS community!

I have an issue when I try to train the model from scratch on images containing mainly digits (either dates "dd/mm/yy" or simple sequences from the ICDAR 2013 dataset).

The problem is that, at some point, the generator hinge loss becomes NaN (in ScrabbleGAN_baseModel.backward_G); the reason is that the tensor "ones_img" in ScrabbleGAN_baseModel.get_current_visuals() becomes NaN in the first place.
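(For reference, a minimal way to catch this early is a finiteness check on the discriminator output before it feeds the loss. The sketch below uses plain Python lists as stand-ins for the tensors; the helper name and values are illustrative, not from the repo.)

```python
import math

def check_finite(name, values):
    """Fail fast instead of letting NaN propagate into later losses."""
    bad = [v for v in values if not math.isfinite(v)]
    if bad:
        raise ValueError(f"non-finite values in {name}: {bad}")

dis_fake = [0.3, -1.2, 0.7]                # stand-in for D(fake) scores
check_finite("dis_fake", dis_fake)          # raises if any score is NaN/Inf
loss_G = -sum(dis_fake) / len(dis_fake)     # generator hinge loss: -mean(D(fake))
```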

Please let me know how to avoid this situation. Thanks in advance for your help!

P.S. Here are some logs (loss_G and dis_fake represent the generator hinge loss and the ones_img tensor, respectively): logs_github

AhmedAl93 avatar Sep 18 '20 13:09 AhmedAl93

Did you try to run the regular IAM/RIMES experiment from scratch?

rlit avatar Sep 22 '20 13:09 rlit

Yes, I successfully trained the model on IAM from scratch.

AhmedAl93 avatar Sep 25 '20 09:09 AhmedAl93

I have the same problem after I changed the alphabet (only lowercase letters and digits). Everything worked fine on my own dataset, until I tried again with an adjusted alphabet. After the first epoch, the real and fake OCR loss become negative.

Edit: After changing the alphabet back to the original one (alphabetEnglish), the negative losses disappeared again, so most likely the issue occurs when characters are encoded or decoded?

@AhmedAl93 Have you found a solution to this problem?

kymillev avatar Oct 02 '20 16:10 kymillev

@kymillev No solution until now :/

AhmedAl93 avatar Oct 05 '20 13:10 AhmedAl93

Hi @kymillev and @AhmedAl93, thanks for your interest in this package, and for your patience. We try our best to respond to your questions in these challenging times.

The common cause I found when I ran into these errors myself was data quality, which led to the NaN loss. This might include:

  • words with empty transcript/annotation
  • words with bad/empty content (e.g. all zeros)
  • words that include mainly/only punctuation, or a single letter (e.g. "i")
  • words that are too narrow/too wide after the data preparation (all characters should have a similar aspect ratio), even though they are supposed to be filtered out

Try filtering the data and see if this solves the problem.
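A minimal filtering sketch along those lines (the helper name, pixel budget, and ratio thresholds are illustrative assumptions, not values from the repo):

```python
import string

def keep_word(image_width, transcript, px_per_char=16,
              min_ratio=0.5, max_ratio=2.0):
    """Return True if a (word image, transcript) pair is safe to train on."""
    if not transcript or len(transcript) < 2:             # empty or single-letter words
        return False
    if all(c in string.punctuation for c in transcript):  # punctuation-only words
        return False
    # Reject words whose width per character is far from the expected range.
    ratio = image_width / (len(transcript) * px_per_char)
    return min_ratio <= ratio <= max_ratio                # too narrow / too wide

print(keep_word(96, "date"))   # 96 / (4 * 16) = 1.5 -> kept
print(keep_word(96, "i"))      # single letter -> rejected
print(keep_word(400, "ab"))    # 400 / (2 * 16) = 12.5 -> too wide, rejected
```

An "all-zero pixels" check would need the image array itself and is omitted here for brevity.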

rlit avatar Oct 05 '20 15:10 rlit

One addition to @rlit's points: make sure your real images and fake images have a similar size distribution. For example, if every word in your fake-image lexicon is 3 characters long, then every fake image will be 48 pixels wide (16 pixels per character is standard for the fake image generator). If the real images have different numbers of characters, or are 3 characters but not resized to 48 pixels wide, the discriminator will learn that 48 pixels wide is likely fake and anything else is real. This point had me stuck for a while, but after fixing it and the points above from @rlit, I could train on a different alphabet (non-ASCII characters).

darraghdog avatar Jan 27 '21 21:01 darraghdog

If the real images have different numbers of characters, or are 3 characters but not resized to 48 pixels wide, the discriminator will learn that 48 pixels wide is likely fake and anything else is real.

Hi @darraghdog, so this means the number of characters in a real image must equal that of a fake image, right?

chiakiphan avatar Feb 17 '21 10:02 chiakiphan

I found that an approximately equal distribution of the number of characters, with images resized to the same width per character, helps a lot.
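One way to check this is to compare word-length histograms of the generator's lexicon and the real training set (the example words below are made up):

```python
from collections import Counter

def length_histogram(words):
    """Histogram of word lengths, for comparing the lexicon vs. real data."""
    return Counter(len(w) for w in words)

real_words = ["12/05/19", "2013", "7", "31/12/20"]  # varied lengths
lexicon    = ["123", "456", "789"]                  # only length 3

print(length_histogram(real_words))  # lengths 8, 4, 1 appear in the real data
print(length_histogram(lexicon))     # only length 3 -> a mismatch to fix
```

If the two histograms differ badly, resampling the lexicon toward the real distribution is a simple fix.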

darraghdog avatar Feb 17 '21 22:02 darraghdog

@darraghdog I found that real images are padded to the same size even though they have different numbers of characters. So should I make the number of characters the same within a batch?

xiaomaxiao avatar Mar 10 '21 03:03 xiaomaxiao