
Multilingual support

Open · decadance-dance opened this issue 1 year ago · 49 comments

πŸš€ The feature

Support for multiple languages (i.e. VOCABS["multilingual"]) in the pretrained models.
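
For context, the referenced vocabulary already ships with docTR; a minimal sketch of inspecting it (assuming a recent docTR version where doctr.datasets.VOCABS includes the "multilingual" key):

```python
# Minimal sketch, assuming a recent docTR release that exposes the
# predefined "multilingual" character set via doctr.datasets.VOCABS.
from doctr.datasets import VOCABS

multilingual_vocab = VOCABS["multilingual"]
print(f"{len(multilingual_vocab)} characters in the multilingual vocab")
print(multilingual_vocab[:80])  # peek at the first characters
```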

Motivation, pitch

It would be great to use models that support multiple languages, because it would significantly improve the user experience in many cases.

Alternatives

No response

Additional context

No response

decadance-dance avatar Aug 20 '24 12:08 decadance-dance

Hi @decadance-dance :wave:,

Have you already tried docTR (https://huggingface.co/Felix92/doctr-torch-parseq-multilingual-v1) or OnnxTR (https://huggingface.co/Felix92/onnxtr-parseq-multilingual-v1)? :)

Depends a bit on whether there is any data from mindee we could use. Question goes to @odulcy-mindee ^^
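
For anyone finding this later: a rough sketch of how the linked recognition model could be plugged into a docTR pipeline (the image path is a placeholder, and the from_hub usage follows the pattern documented for docTR hub models):

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor, from_hub

# Load the multilingual recognition model from the Hugging Face Hub
reco_model = from_hub("Felix92/doctr-torch-parseq-multilingual-v1")

# Combine it with a pretrained detection model
predictor = ocr_predictor(det_arch="db_resnet50", reco_arch=reco_model, pretrained=True)

doc = DocumentFile.from_images(["sample.jpg"])  # placeholder image
result = predictor(doc)
print(result.render())
```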

felixdittrich92 avatar Aug 20 '24 13:08 felixdittrich92

Hi @felixdittrich92, I have used docTR for more than half a year but never came across this multilingual model, lol. So I am going to try it, thanks.

decadance-dance avatar Aug 20 '24 16:08 decadance-dance

Ah, let's keep this issue open, there is more to do I think :)

felixdittrich92 avatar Aug 20 '24 16:08 felixdittrich92

Hi @felixdittrich92, I have used docTR for more than half a year but never came across this multilingual model, lol. So I am going to try it, thanks.

Happy about any feedback on how it works for you :) The model was fine-tuned only on synthetic data.

felixdittrich92 avatar Aug 21 '24 07:08 felixdittrich92

Depends a bit on whether there is any data from mindee we could use. Question goes to @odulcy-mindee ^^

Unfortunately, we don't have such data

odulcy-mindee avatar Aug 27 '24 08:08 odulcy-mindee

@decadance-dance For training such recognition models I don't see a problem: we can generate synthetic training data and, in the best case, only need real validation samples. But for detection we would need real data, and that's the main issue.

In general we would need the help of the community to collect documents (newspapers, receipt photos, etc.) in diverse languages (they can be unlabeled). This would require signing a license so that we can freely use the data. With enough diverse data we could use Azure Document AI, for example, to pre-label it. Later on I wouldn't see an issue with open-sourcing this dataset.

But I'm not sure how to trigger such an "event" :sweat_smile: @odulcy-mindee

felixdittrich92 avatar Aug 27 '24 08:08 felixdittrich92

Hello =) I found some public datasets for various tasks: English documents, mathematics documents, LaTeX OCR (two datasets), and Chinese OCR (three datasets).

nikokks avatar Sep 06 '24 13:09 nikokks

Moreover, it could be interesting for Chinese detection models to place multiple recognition samples in the same image without overlap. This should help a Chinese detection model perform better without real detection data. Anyone interested in creating random multilingual data for detection models (Hindi, Chinese, etc.)?
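
To make that concrete, here is a minimal, illustrative sketch of composing detection pages from pre-rendered word crops without overlap (not docTR code; word_crops and the canvas size are placeholder assumptions):

```python
# Illustrative sketch only: compose synthetic detection pages by pasting
# pre-rendered multilingual word crops onto a blank canvas without overlap.
# `word_crops` is assumed to be a list of PIL images (e.g. from a synthetic
# word generator); the returned boxes could serve as detection labels.
import random
from PIL import Image

def compose_page(word_crops, size=(1024, 1448), max_tries=50):
    page = Image.new("RGB", size, "white")
    boxes = []  # (x1, y1, x2, y2) in absolute pixels
    for crop in word_crops:
        w, h = crop.size
        if w >= size[0] or h >= size[1]:
            continue  # skip crops that do not fit on the canvas
        for _ in range(max_tries):
            x = random.randint(0, size[0] - w)
            y = random.randint(0, size[1] - h)
            cand = (x, y, x + w, y + h)
            # accept the position only if it intersects no already placed box
            if all(cand[2] <= x1 or cand[0] >= x2 or cand[3] <= y1 or cand[1] >= y2
                   for x1, y1, x2, y2 in boxes):
                page.paste(crop, (x, y))
                boxes.append(cand)
                break
    return page, boxes
```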

nikokks avatar Sep 06 '24 13:09 nikokks

Hi @nikokks πŸ˜ƒ Recognition should not be such a big deal, I already found a good way to generate such data for fine-tuning.

Collecting multilingual data for detection is troublesome because it should be real data (or, if possible, really well-generated data, for example with a fine-tuned FLUX model maybe!?). We need different kinds of layouts/documents (newspapers, invoices, receipts, cards, etc.), so the data should come close to real use cases (not only scans but also document photos, etc.) :)

felixdittrich92 avatar Sep 06 '24 14:09 felixdittrich92

Collecting multilingual data for detection is troublesome because it should be real data

Can you estimate how much data we would need to provide multilingual capabilities on the same level as English-only OCR?

decadance-dance avatar Oct 09 '24 16:10 decadance-dance

Hi @decadance-dance :wave:,

I think if we could collect ~100-150 different types of documents for each language, we would have a good starting point (in the end the language doesn't matter, it's more about the different character sets / fonts / text sizes). For example, the attached sample (bild_design) is super useful because it captures a lot of different fonts / text sizes, or something "in the wild" (img_03771).

In the end, it's more critical to make sure that we really can use such images legally.

The tricky part is detection, because we need completely real data. If we have this, the recognition part should be much easier: we could create some synthetic data and evaluate on the already collected real data.

I think if we are able to collect the data by the end of January, I could provide pre-labeling via Azure's Document AI.

Currently missing parts are:

  • handwritten (for the detection model; recognition is another story)
  • Chinese (symbols)
  • Hindi
  • Bulgarian/Ukrainian/Russian/Serbian (Cyrillic)
  • special symbols (bullet points, etc.)
  • more Latin-based (Spanish, Czech, ...)
  • ...

CC @odulcy-mindee

Lang list: https://github.com/eymenefealtun/all-words-in-all-languages

felixdittrich92 avatar Oct 10 '24 06:10 felixdittrich92

@felixdittrich92, thank you for the detailed answer. I'd be glad to help collect data. It would be great if we could promote this initiative within the community. I think if everyone provides at least a couple of samples, a good amount of data can be collected. BTW, is there any flow or established process for collecting and submitting data?

decadance-dance avatar Oct 10 '24 08:10 decadance-dance

@decadance-dance Not yet. Maybe the easiest would be to create a Hugging Face Space for this, because from there you could also easily take pictures with your smartphone, and under the hood we push the taken or uploaded images into an HF dataset.

In this case we could also add an agreement before any data can be uploaded, stating that the uploader holds all rights to the image and uploads it knowing that the uploaded images will be provided openly to everyone who downloads the dataset.

Wdyt ?

Again CC @odulcy-mindee :D
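
The "under the hood" part of such a Space could look roughly like this (a sketch assuming the huggingface_hub client; the repo id, folder scheme, and token handling are placeholders):

```python
# Rough sketch: push each uploaded or captured image into a Hugging Face
# dataset repo. Assumes HF_TOKEN is configured as a Space secret.
import uuid
from huggingface_hub import HfApi

api = HfApi()

def push_image(local_path: str, language: str,
               repo_id: str = "your-org/multilingual-raw-docs"):
    # store each upload under a per-language folder with a unique name
    filename = f"{language}/{uuid.uuid4().hex}.jpg"
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=filename,
        repo_id=repo_id,
        repo_type="dataset",
    )
    return filename
```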

felixdittrich92 avatar Oct 10 '24 09:10 felixdittrich92

I found one possible dataset for printed documents in multiple languages: Wikisource. They have text and images at the page level, originally created using some existing OCR (Google Vision / Tesseract), and the data has then been corrected/proofread by people. They have annotations to differentiate what has been proofread and what has not. An example: https://te.wikisource.org/wiki/ΰ°ͺుట%3AAandhrakavula-charitramu.pdf/439. The license would be CC-BY-SA, and I expect they have only pulled books for which copyright has expired. Collecting fonts for various languages is a bigger problem though (because of licenses).

ramSeraph avatar Oct 10 '24 17:10 ramSeraph

Thanks @ramSeraph for sharing, I will have a look :+1:

@decadance-dance @nikokks

I created a Space which can be used to collect some data (only raw data to start), wdyt? https://huggingface.co/spaces/Felix92/docTR-multilingual-Datacollector

Later on, once we have collected enough raw data, we can filter it and pre-label with Azure Document AI.

felixdittrich92 avatar Oct 17 '24 14:10 felixdittrich92

Sounds good to me. Thanks

decadance-dance avatar Oct 21 '24 07:10 decadance-dance

@decadance-dance @nikokks @ramSeraph @allOther

I created a request to the mindee team to provide support on this task. https://mindee-community.slack.com/archives/C02HGHMUJH0/p1730452486444309

It would be nice if you could write a comment in the thread about your needs, to support this :pray:

felixdittrich92 avatar Nov 01 '24 09:11 felixdittrich92

The first stage would be to improve the detection models; for the second stage, the recognition part, we could generate additional synthetic data.

felixdittrich92 avatar Nov 12 '24 07:11 felixdittrich92

Short update here:

I collected ~30k samples containing:

  • ~7k Arabic
  • ~1k Hindi
  • ~1k Chinese
  • ~1k Thai
  • ~4k Cyrillic
  • ~1k Greek
  • ~5k additional Latin extended (Polish, Spanish, and so on)
  • ~10k receipts from around the globe

(including ~15% handwritten, mostly Russian, Arabic and Latin)

Now I need to find a way to annotate all this data. AWS Textract & Azure Document AI failed as possibly useful pre-labeling solutions.

The best results were reached with docTR/OnnxTR (detection only), but there are still too many issues to include them directly into our dataset for pretraining.

felixdittrich92 avatar Nov 19 '24 15:11 felixdittrich92

Now I need to find a way to annotate all this data. AWS Textract & Azure Document AI failed as possibly useful pre-labeling solutions.

Why did they fail?

decadance-dance avatar Nov 19 '24 17:11 decadance-dance

Now I need to find a way to annotate all this data. AWS Textract & Azure Document AI failed as possibly useful pre-labeling solutions.

Why did they fail?

The detection results were really poor for many samples.

felixdittrich92 avatar Nov 19 '24 17:11 felixdittrich92

For training such recognition models I don't see a problem: we can generate synthetic training data and, in the best case, only need real validation samples.

Which way of generating synthetic word text do you think is more beneficial?
a) use a predefined vocab and randomly sample characters from it within a given length range, like you are doing in _WordGenerator
b) use a predefined text corpus and randomly sample entire words from it
c) combine (a) and (b)

decadance-dance avatar Nov 19 '24 17:11 decadance-dance

The detection results were really poor for many samples.

How did you evaluate them? As I understood, your data is not annotated yet. Did you check samples manually?

decadance-dance avatar Nov 19 '24 17:11 decadance-dance

AWS Textract & Azure Document AI failed as possibly useful pre-labeling solutions

Maybe EasyOCR will work for you?

decadance-dance avatar Nov 19 '24 17:11 decadance-dance

The detection results were really poor for many samples.

How did you evaluate them? As I understood, your data is not annotated yet. Did you check samples manually?

I OCR'd some samples with Azure Document AI and Textract and wrote a script to visualize these samples. For OnnxTR, I pre-labeled all files and also checked the same files manually.
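
For reference, such a visual check could look roughly like this with docTR itself (file names are placeholders; the box handling assumes docTR's relative-coordinate geometry):

```python
# Sketch of a simple visual check for pre-labels: run docTR detection +
# recognition and draw the predicted word boxes so samples can be reviewed.
from doctr.io import DocumentFile
from doctr.models import ocr_predictor
from PIL import Image, ImageDraw

predictor = ocr_predictor(pretrained=True)
doc = DocumentFile.from_images(["sample.jpg"])  # placeholder file name
result = predictor(doc)

img = Image.open("sample.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
w, h = img.size
for block in result.pages[0].blocks:
    for line in block.lines:
        for word in line.words:
            (x0, y0), (x1, y1) = word.geometry  # relative coordinates
            draw.rectangle([x0 * w, y0 * h, x1 * w, y1 * h], outline="red", width=2)
img.save("sample_prelabel_check.jpg")
```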

felixdittrich92 avatar Nov 19 '24 18:11 felixdittrich92

AWS Textract & Azure Document AI failed as possibly useful pre-labeling solutions

Maybe EasyOCR will work for you?

I haven't tested it with this data yet, but if I remember correctly, docTR was more accurate in most cases.

felixdittrich92 avatar Nov 19 '24 18:11 felixdittrich92

For training such recognition models I don't see a problem: we can generate synthetic training data and, in the best case, only need real validation samples.

Which way of generating synthetic word text do you think is more beneficial?
a) use a predefined vocab and randomly sample characters from it within a given length range, like you are doing in _WordGenerator
b) use a predefined text corpus and randomly sample entire words from it
c) combine (a) and (b)

I would go with option (b) and augment a fixed portion of this data (words) with low-frequency characters (like the % symbol).

I did the same to train the multilingual parseq model :)
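
For illustration, a minimal sketch of option (b) with that kind of augmentation (this is not the actual training setup; the corpus, ratio, and symbol set are placeholders):

```python
# Minimal sketch of corpus-based word sampling with rare-character injection.
import random

RARE_CHARS = "%&@#§€"   # characters that rarely appear in corpora (placeholder set)
AUGMENT_RATIO = 0.1      # augment ~10% of the sampled words (placeholder value)

def sample_words(corpus: list[str], n: int) -> list[str]:
    # sample whole words from the corpus
    words = [random.choice(corpus) for _ in range(n)]
    # inject a rare character into a fixed share of the sampled words
    for i in range(len(words)):
        if random.random() < AUGMENT_RATIO:
            ch = random.choice(RARE_CHARS)
            pos = random.randint(0, len(words[i]))
            words[i] = words[i][:pos] + ch + words[i][pos:]
    return words

# Example: sample_words(["invoice", "总衑", "Ρ†Π΅Π½Π°", "preço"], 1000)
```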

felixdittrich92 avatar Nov 19 '24 18:11 felixdittrich92

I think the only option is to label a part of the data manually -> fine-tune -> pre-label -> correct, and repeat in an iterative process πŸ™ˆπŸ˜… (really time consuming).

felixdittrich92 avatar Nov 19 '24 18:11 felixdittrich92

I had an idea that could help speed things up when dealing with documents. What if there were a database of selectable-text PDFs or other documents (DOCX, PPTX) in the desired languages? Then you could extract the text with certainty, convert the PDF into the desired image format with the required resolution/DPI, adjust the bounding boxes according to the resolution and text, and voilΓ . I have around 80k selectable documents in Brazilian Portuguese (Latin) and can start testing to see if this works.
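
A minimal sketch of that pipeline, assuming PyMuPDF (fitz) for both rendering and word extraction (output names and the DPI value are placeholders):

```python
# Sketch: render a PDF page to an image and extract word boxes scaled to the
# same resolution. PDF coordinates are in points (72 per inch), so scaling
# boxes by the same zoom factor keeps them aligned with the rendered image.
import fitz  # PyMuPDF

def pdf_page_to_sample(pdf_path: str, page_no: int = 0, dpi: int = 200):
    zoom = dpi / 72
    doc = fitz.open(pdf_path)
    page = doc[page_no]
    pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
    image_path = f"page_{page_no}.png"
    pix.save(image_path)
    # each entry from get_text("words"): (x0, y0, x1, y1, text, block, line, word)
    words = [
        (x0 * zoom, y0 * zoom, x1 * zoom, y1 * zoom, text)
        for x0, y0, x1, y1, text, *_ in page.get_text("words")
    ]
    return image_path, words
```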

murilosimao avatar Feb 03 '25 19:02 murilosimao

Hey @murilosimao πŸ‘‹,

Yep sounds great feel free to update here if you have some results πŸ‘

I will (hopefully soon) also discuss a strategy with @sebastianMindee

felixdittrich92 avatar Feb 04 '25 08:02 felixdittrich92