
Updated langdata

ahmed-alaa opened this issue 7 years ago • 23 comments

We need the updated langdata with the updated unicharset, especially for the Arabic language, to be able to maintain the same accuracy in the new .traineddata.

Thanks

ahmed-alaa avatar Aug 04 '17 10:08 ahmed-alaa

I don't know if Ray plans to update these files.

Anyway, it seems that you can now extract the unicharset and the dawg files used by the new lstm engine from the traineddata.

amitdo avatar Aug 04 '17 11:08 amitdo

Yes, using combine_tessdata is an easy way to get the unicharset and the dawg files from a traineddata file. In a 2nd step the word list can be made from those files using dawg2wordlist.

Then you can remove unwanted components, fix word lists and reverse the whole process to create your own new traineddata file.
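For example, the round trip looks roughly like this (the ara.* file names are only illustrative; the lstm-* component names are the ones used by the LSTM traineddata format):

    combine_tessdata -u ara.traineddata ara.   # unpack into ara.lstm, ara.lstm-unicharset, ara.lstm-word-dawg, ...
    dawg2wordlist ara.lstm-unicharset ara.lstm-word-dawg ara.wordlist   # dawg -> plain word list
    # ... edit ara.wordlist, delete any unwanted ara.* components ...
    wordlist2dawg ara.wordlist ara.lstm-word-dawg ara.lstm-unicharset   # word list -> dawg
    combine_tessdata ara.                      # repack the remaining ara.* files into ara.traineddata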

stweil avatar Aug 04 '17 14:08 stweil

Actually, I'm trying to fine-tune and continue from the new Arabic.traineddata, but the newly generated traineddata file can't keep the same accuracy as Arabic.traineddata.

Also, the current unicharset in the langdata repo has around 2048 entries, but the one generated from the new traineddata has only around 300. Does that make sense?
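(For reference, the usual continue-from recipe looks roughly like the sketch below; the file and output names are placeholders, not necessarily the exact commands that were run.)

    combine_tessdata -e ara.traineddata ara.lstm        # extract the LSTM model to continue from
    lstmtraining \
      --continue_from ara.lstm \
      --traineddata ara.traineddata \
      --train_listfile ara.training_files.txt \
      --model_output output/ara_ft \
      --max_iterations 400
    lstmtraining --stop_training \
      --continue_from output/ara_ft_checkpoint \
      --traineddata ara.traineddata \
      --model_output output/ara_ft.traineddata           # final fine-tuned traineddata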

ahmed-alaa avatar Aug 04 '17 14:08 ahmed-alaa

Point taken. It needs updating. I was going to push until I discovered a bug with the RTL word lists. Then I also need to integrate this issues list, which I haven't looked at in a while, and rerun training.

theraysmith avatar Aug 08 '17 00:08 theraysmith

@theraysmith Is it ready for update now?

Shreeshrii avatar Oct 07 '17 10:10 Shreeshrii

@jbreiden Do you have the files to update this repo for 4.0.0?

Alternatively, should we try to reverse engineer the files from tessdata_fast? They will not be complete (config, wordlist, numbers, punc, unicharset).

Shreeshrii avatar Mar 12 '18 10:03 Shreeshrii

Do you have the files to update this repo for 4.0.0?

No, I don't. But I have been looking into this, and continue to.

jbreiden avatar Mar 12 '18 16:03 jbreiden

Hmm. Sorry. I thought I had done this in September. The Google repo is up-to-date apart from the redundant files that need to be deleted. I'll work with Jeff to get this done.

theraysmith avatar Mar 20 '18 03:03 theraysmith

Thanks!

Will the training process, tesstrain.sh and related scripts also need changes?

Shreeshrii avatar Mar 20 '18 04:03 Shreeshrii

Also, what about the possibility of training from scanned images?

Shreeshrii avatar Mar 20 '18 05:03 Shreeshrii

Also, what about the possibility of training from scanned images?

It is possible and seems to work pretty well, as I heard from @wrznr.

stweil avatar Mar 20 '18 05:03 stweil

@stweil Do you know how the box files for the scanned images were created?

AFAIK, box files generated by tesseract's makebox do not match the format of the files produced by text2image.

Shreeshrii avatar Mar 20 '18 05:03 Shreeshrii

@theraysmith

  1. Since training depends on the fonts used, I suggest also uploading a file with the font list used for training for every language and script to their subdirectories in langdata. This file can then be referred to by tesstrain.sh/language_specific.sh.

  2. What is the recommended method for combining languages to create a script traineddata?

  3. Is it possible to use multiple languages/scripts to continue from for creating a 'script' type of traineddata by finetuning? If so, how?

Shreeshrii avatar Mar 21 '18 08:03 Shreeshrii

On Wed, Mar 21, 2018 at 1:28 AM Shreeshrii [email protected] wrote:

@theraysmith https://github.com/theraysmith

Since training depends on the fonts used, I suggest uploading a file with the font list used for training for every language and script to their own subdirectories. This file can then be referred to by tesstrain.sh/language_specific.sh.

Yes, I have a list of fonts used for each training, and can add that to the langdata.

Is it possible to use multiple languages to continue from for creating a 'script' type of traineddata by finetuning?

Unfortunately not. I did have an idea for a better multi-language implementation that would cleanly use models from multiple languages at once, but that depends on getting rid of the old code, and moving the multi-language functionality into the beam search. Until the old code is gone, that would be very messy.


-- Ray.

theraysmith avatar Mar 21 '18 17:03 theraysmith

@Shreeshrii You're right, the box files were created with an extra script. It is rather straightforward (see the sketch after the list below):

  • split your GT line into characters
  • print them ‘one-char-per-line’ and add the coordinates of the whole line to each character
  • add a tab stop (as an EOL indicator) to the end of the line sequence with coordinates +1
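A minimal bash sketch of those three steps, assuming a UTF-8 locale and that the bounding box of the whole line is already known (this is only an illustration, not the actual script):

    #!/usr/bin/env bash
    # Build LSTM-style box entries for ONE ground-truth text line whose
    # bounding box (left bottom right top) is already known.
    # Usage: gt_line_to_box.sh "ground truth text" LEFT BOTTOM RIGHT TOP >> image.box
    set -eu
    text=$1; left=$2; bottom=$3; right=$4; top=$5
    page=0

    # One entry per character, all sharing the coordinates of the whole line.
    # `grep -o .` splits the line into single characters (multibyte-safe in a UTF-8 locale);
    # spaces in the ground truth simply become space entries here.
    printf '%s' "$text" | grep -o . | while IFS= read -r ch; do
        printf '%s %d %d %d %d %d\n' "$ch" "$left" "$bottom" "$right" "$top" "$page"
    done

    # Tab entry as the end-of-line marker, with coordinates shifted by +1.
    printf '\t %d %d %d %d %d\n' $((left + 1)) $((bottom + 1)) $((right + 1)) $((top + 1)) "$page"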

wrznr avatar Mar 22 '18 10:03 wrznr

related: https://github.com/tesseract-ocr/tesseract/issues/1276#issuecomment-358970736

amitdo avatar Mar 22 '18 10:03 amitdo

@wrznr's method is similar to, but easier than, my proposal.

amitdo avatar Mar 22 '18 10:03 amitdo

@wrznr

Please share the script, if possible. I would like to test it for Indic/complex scripts. It will also be useful to many others who have been asking for this feature.

You could create a PR to put it in https://github.com/tesseract-ocr/tesseract/tree/master/contrib

Thanks!

Shreeshrii avatar Mar 22 '18 11:03 Shreeshrii

It won't work well for complex scripts like the Indic scripts.

amitdo avatar Mar 22 '18 11:03 amitdo

@theraysmith @jbreiden

Any update regarding this???

On Tue 20 Mar, 2018, 8:52 AM theraysmith, [email protected] wrote:

Hmm. Sorry. I thought I had done this in September. The Google repo is up-to-date apart from the redundant files that need to be deleted. I'll work with Jeff to get this done.


Shreeshrii avatar Apr 03 '18 12:04 Shreeshrii

@Shreeshrii FYI: https://github.com/OCR-D/ocrd-train

wrznr avatar May 03 '18 14:05 wrznr

@wrznr Thank you for the makefile for doing LSTM training from scratch. I will give it a try.

Do you also have a variant for doing fine tuning or adding a layer?

Shreeshrii avatar May 03 '18 14:05 Shreeshrii

https://github.com/tesseract-ocr/langdata_lstm has the updated langdata files for LSTM training.

This issue can be closed.

Shreeshrii avatar Aug 24 '18 08:08 Shreeshrii