
Page level images

Open Shreeshrii opened this issue 7 years ago • 49 comments

The script works for line level images.

I have a number of scanned page images with ground truth files.

Does the OCR-D project have any tools to segment them into line images with corresponding ground truth text?

Shreeshrii avatar May 04 '18 08:05 Shreeshrii

Unfortunately, not yet. We are working on something in this direction to align the full texts from the German Text Archive with the corresponding images. Hopefully, I can get back to you soon with some tool. Additionally, @jbaiter does some things along these lines...

wrznr avatar May 04 '18 08:05 wrznr

Thanks. It will be a useful tool.

I am trying to use some ocropus tools to split the page into line images. I will either OCR the line images to create text to be corrected for ground truth, or type it fully.

Shreeshrii avatar May 04 '18 10:05 Shreeshrii

@Shreeshrii , you could try this approach:

  1. Split the page image into line images with ocropus/kraken
  2. Run the most suitable OCR model on the line images
  3. For each line in the resulting OCR, find the ground truth line with the lowest edit distance (e.g. Levenshtein)
  4. Every matching line with an edit distance below a certain threshold should have a fairly high chance of being a correct match

One problem with this approach is that segmentation errors (e.g. a line gets cut in two, a few words at the beginning/end are missing, etc) lead to false positives. This also assumes that your ground-truth is split into lines. If not, you will have to modify step 3 to slide each OCR line over the ground truth and determine the best match that way, with some added heuristics to not match partial words, etc.
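For illustration, here is a minimal sketch of steps 3 and 4 in Python, assuming the python-Levenshtein package is available; the file names and the relative-distance threshold are placeholders and would need tuning:

    # Minimal sketch of steps 3-4: for each OCR line, find the ground-truth
    # line with the lowest edit distance and keep the pair only if the
    # distance (relative to line length) stays below a threshold.
    import Levenshtein  # assumed dependency: python-Levenshtein

    def match_lines(ocr_lines, gt_lines, max_rel_distance=0.2):
        matches = []
        for ocr_line in ocr_lines:
            best_gt = min(gt_lines, key=lambda gt: Levenshtein.distance(ocr_line, gt))
            dist = Levenshtein.distance(ocr_line, best_gt)
            # normalize by line length so short and long lines are comparable
            if dist / max(len(ocr_line), 1) <= max_rel_distance:
                matches.append((ocr_line, best_gt))
        return matches

    # hypothetical file names: one OCR line and one ground-truth line per row
    ocr_lines = open("page.ocr.txt", encoding="utf-8").read().splitlines()
    gt_lines = open("page.gt.txt", encoding="utf-8").read().splitlines()
    for ocr_line, gt_line in match_lines(ocr_lines, gt_lines):
        print(gt_line)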

jbaiter avatar May 04 '18 11:05 jbaiter

@jbaiter

I want to use it for Devanagari script. I had looked at ocropus quite some time back. I am not sure if ocropus/kraken supports Devanagari.

Do you know if it has support for complex scripts?

Shreeshrii avatar May 04 '18 12:05 Shreeshrii

@Shreeshrii There are some papers with text recognition results for Devanagari script with Ocropus. However, I am not aware of any shared model you could reuse. You can find some models for Ocropus here: https://github.com/tmbdev/ocropy/wiki/Models

However, instead of steps 1 and 2 you can also use tesseract to create hocr output and then use hocr-extract-images to create the line images and texts.

Moreover, if you have the ground truth in hocr format you can use hocr-eval for the evaluation against your recognition output. Or do you have the ground truth only as plain text without the geometric information?

zuphilip avatar May 11 '18 19:05 zuphilip

@zuphilip I have also read about Devanagari training for ocropus, but the models are not available (I had looked a couple of years ago or so).

Thank you for the link to specific HOCR tools. I will give them a try.

The ground truth files I have are plain text files matching the scanned images, without any positional info. I was able to use them to evaluate OCR accuracy by comparing them to the recognized output.

Shreeshrii avatar May 12 '18 03:05 Shreeshrii

https://github.com/Shreeshrii/imagessan/tree/master/groundtruthimages

Sanskrit language samples in Devanagari script.

Shreeshrii avatar May 12 '18 03:05 Shreeshrii

Ping @adnanulhasan who may still have some sources from the Ocropus training with Devanagari script texts.

zuphilip avatar May 12 '18 07:05 zuphilip

you can also use tesseract for creating a hocr output and then use hocr-extract-images to create the line images and texts.

@zuphilip Thank you. I was able to use it for Devanagari script files also. These are the commands which worked for me (it took a little experimenting to get them right):

:~/hocr-tools$ PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./shree/ -p ./shree/san.pothi-%03d.png  ./shree/Mudgala-Test-01.hocr
:~/hocr-tools$ PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./shree/ -p ./shree/san.pothi-%03d.tif  ./shree/Mudgala-Test-01.hocr

Shreeshrii avatar May 28 '18 08:05 Shreeshrii

The other option which I had used was


    # perform binarization
    ./ocropus-nlbin tests/devatest?.png -o devatest -n -g

    # perform page layout analysis
    ./ocropus-gpageseg 'devatest/????.bin.png' -n

And then running tesseract to get text and correcting it.

Shreeshrii avatar May 28 '18 09:05 Shreeshrii

In case it is helpful to others looking for a solution, posting below a bash script I use for -

  1. taking a scanned page image,
  2. running tesseract with hocr option on it,
  3. running hocr tools to split it into lines.

The ground truth needs to be updated manually; if there is an existing page-level ground truth file, copy it line by line into the line-level ground truth files.

#!/bin/bash
SOURCE="./myfiles/"
lang=san
set -- "$SOURCE"*.png
for img_file; do
    echo -e  "\r\n File: $img_file"
    OMP_THREAD_LIMIT=1 tesseract --tessdata-dir ../tessdata_fast   "${img_file}" "${img_file%.*}"  --psm 6  --oem 1  -l $lang -c page_separator='' hocr
    source venv/bin/activate
    PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./myfiles/ -p "${img_file%.*}"-%03d.exp0.tif  "${img_file%.*}".hocr 
    deactivate
done
rename s/exp0.txt/exp0.gt.txt/ ./myfiles/*exp0.txt

echo "Image files converted to tif. Correct the ground truth files and then run ocr-d train to create box and lstmf files"

Shreeshrii avatar Sep 09 '18 13:09 Shreeshrii

Occasionally, the line images are a bit wider than the text and so they catch the letters from the preceding or the subsequent lines. Is this a problem for the training (i.e. should such images be fixed to ensure that they do not contain top/bottom of the neighbouring lines)?

SultanOrazbayev avatar Nov 23 '18 02:11 SultanOrazbayev

I think this is a problem. It would be great if you could provide a corresponding example, maybe in a specific GitHub issue. Many thanks in advance!

wrznr avatar Nov 23 '18 09:11 wrznr

Please see https://github.com/tesseract-ocr/tesseract/pull/2231 for the WordStr format box files.

Shreeshrii avatar Feb 11 '19 03:02 Shreeshrii

@bertsky: Concerning the comment by @SultanOrazbayev, clipping may help here, right? Is it possible to get polygonal line shapes from tesseract?

wrznr avatar Aug 29 '19 11:08 wrznr

It is possible to get polygon-based segmentation from Tesseract: with BlockPolygon from the page iterator delivered by AnalyseLayout. There is a bug somewhere though: sometimes, paths self-intersect, which even Tesseract itself does not cope with very well (as can be seen by the mask images produced internally, available with GetImage when also passing the raw image again). Maybe by postprocessing one can circumvent this issue – using shapely.geometry functions to self-disjoin paths, or similar.
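For the shapely workaround mentioned above, a minimal sketch (the point list below is a hypothetical self-intersecting outline; in practice the points would come from BlockPolygon):

    # Minimal sketch: repair a self-intersecting polygon with shapely.
    from shapely.geometry import Polygon

    def repair_polygon(points):
        poly = Polygon(points)
        if not poly.is_valid:
            poly = poly.buffer(0)  # self-disjoins the path
            # buffer(0) can yield a MultiPolygon; keep the largest part
            if poly.geom_type == "MultiPolygon":
                poly = max(poly.geoms, key=lambda p: p.area)
        return poly

    # hypothetical "bowtie" outline
    print(repair_polygon([(0, 0), (4, 4), (4, 0), (0, 4)]).is_valid)  # True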

But even without polygon-masked line images you could try clipping to get rid of the intrusions from neighbours, yes. Or alternatively, do resegmentation (i.e. increase coherence via another line segmentation). Both methods are already available as OCR-D processors, as is Tesseract region segmentation (optionally with polygons).

But you want line segmentation with polygons here, right? I am afraid Tesseract's API does not offer that – only for the "block" level!

Should I give details (what/where/how) on using clipping and resegmentation?

bertsky avatar Aug 29 '19 11:08 bertsky

Hi, I am using OCR-D for preparing training data and I am trying to extract data from a dot-matrix font PDF. I created some samples as dot-matrix tif images with gt.txt files, then I used tesseract to extract the text from my PDF, but it extracts only some letters and sometimes it recognizes 0 as 8. Please give a solution to fix this issue.

kabilankiruba avatar Sep 16 '19 13:09 kabilankiruba

@kabilankiruba This is clearly not related to this thread. Please consider contacting the Tesseract user group.

wrznr avatar Oct 01 '19 11:10 wrznr

Is there any tool which will display the line images and gt.txt side by side for easy correction, after generating the files from HOCR output (as suggested here)?

I do not want to run a web server to do this.

Can it be done via javascript/html - show an image and its gt.txt, save the corrected gt.txt, and have an arrow/option to display the next image and gt.txt?

Basically, I would like to run this on my Windows 10 desktop.

Shreeshrii avatar Jan 12 '20 14:01 Shreeshrii

@kba @cneud @stweil Can you recommend a tool for this purpose? Wasn't there such a thing in OCRopy?

wrznr avatar Jan 13 '20 07:01 wrznr

https://github.com/OpenArabic/OCR_GS_Data/blob/master/_doublecheck_viewer.py creates an HTML5-based webpage for reviewing OCR training/testing data.

Shreeshrii avatar Jan 13 '20 10:01 Shreeshrii

Can you recommend a tool for this purpose?

  • ocropus-gtedit
  • ketos transcribe (in kraken)
  • https://github.com/qurator-spk/neath (server based)
  • https://github.com/UB-Mannheim/ocr-gt-tools (server-based ocropus-gtedit)

Can it be done via javascript/html - show an image and its gt.txt - save corrected gt.txt and have an arrow/option to display next image and gt.txt.

Both kraken's and ocropy's transcription tools do that. The hocrjs viewer has an option to make items contenteditable but no way to save them.

kba avatar Jan 13 '20 11:01 kba

Thank you. I think the following workflow will do the trick.

./ocropus-nlbin bookpages/*.png -o book

 ./ocropus-gpageseg 'book/????.bin.png'

 ./ocropus-gtedit html -f 20 -H 48   ./book/*/*.png

writing correction.html

Transfer correction.html to Windows and browse it. Add the ground truth text for each line image. Save the HTML as a complete webpage. Transfer the file back to Linux.

./ocropus-gtedit extract -p bookgt correction.html

Shreeshrii avatar Jan 13 '20 14:01 Shreeshrii

@Shreeshrii could you please clarify how you match the extracted ground truth txt files from ocropy/ocropus with the line-level images obtained with your script? After using ocropus-nlbin, the original filename is "lost" (ocropus uses numerically increasing values).

Using tesstrain, I assume that you don't train tesseract on the line level images and gt obtained with ocropus? These images are slightly different compared to the line images obtained with your script (which uses tesseract directly) because of preprocessing with ocropus-nlbin. But please correct me if I am wrong.

I am confused about what the current workflow is to correct the extracted ground truth:

  • Using just your script and editing the gt files manually
  • Using only ocropy
  • Combining your script with the tools from ocropus?
    1. Change the filenames to the same numerical increasing values that ocropus uses
    2. Run script
    3. Work with ocropus commands on the line images obtained from your script
    4. Use the line images obtained from the tesseract script with the edited gt from ocropus

fjp avatar Jan 31 '20 14:01 fjp

@fjp These are two different approaches. I have used both separately, only on an experimental basis, mostly for testing.

Shreeshrii avatar Jan 31 '20 15:01 Shreeshrii

Hello, are there still any plans to integrate some kind of tool into tesstrain?

I was facing similar requirements for generating training data in a Windows environment, which ended up in a small script that extracts both coordinates and text data from an existing ALTO file and writes training data pairs.
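Not the script itself, but a rough sketch of the idea in Python, assuming Pillow and the standard library; the file names and the ALTO namespace version are placeholders and need adjusting to the data at hand:

    # Rough sketch: crop line images and write .gt.txt pairs from an ALTO file.
    import xml.etree.ElementTree as ET
    from PIL import Image

    ALTO_NS = "http://www.loc.gov/standards/alto/ns-v3#"  # adjust to your ALTO version

    page = Image.open("page_0001.png")   # placeholder file names
    tree = ET.parse("page_0001.xml")

    for i, line in enumerate(tree.iter(f"{{{ALTO_NS}}}TextLine")):
        # bounding box of the line from the ALTO attributes (pixel units assumed)
        x, y = int(line.get("HPOS")), int(line.get("VPOS"))
        w, h = int(line.get("WIDTH")), int(line.get("HEIGHT"))
        # join the CONTENT of all String elements into the line text
        text = " ".join(s.get("CONTENT") for s in line.findall(f"{{{ALTO_NS}}}String"))
        page.crop((x, y, x + w, y + h)).save(f"line_{i:04d}.png")
        with open(f"line_{i:04d}.gt.txt", "w", encoding="utf-8") as f:
            f.write(text + "\n")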

M3ssman avatar Apr 02 '20 07:04 M3ssman

@M3ssman This would be a great contribution. Especially, since it opens up a way to use Aletheia-created GT with tesstrain.

wrznr avatar Apr 02 '20 09:04 wrznr

@wrznr I must confess: there are some caveats. It adds another dependency, python-opencv. Pillow kept complaining about images >80 MB. Further, on Windows 10, one additionally needs to install the C++ 14.0 build tools, whose required version depends on the Python version used by numpy, which in turn is used by opencv.

M3ssman avatar Apr 02 '20 19:04 M3ssman

In case it is helpful to others looking for a solution, posting below a bash script I use for -

1. taking a scanned page image,

2. running tesseract with hocr option on it,

3. running hocr tools to split it into lines.

The ground truth needs to be updated manually; if there is an existing page-level ground truth file, copy it line by line into the line-level ground truth files.

#!/bin/bash
SOURCE="./myfiles/"
lang=san
set -- "$SOURCE"*.png
for img_file; do
    echo -e  "\r\n File: $img_file"
    OMP_THREAD_LIMIT=1 tesseract --tessdata-dir ../tessdata_fast   "${img_file}" "${img_file%.*}"  --psm 6  --oem 1  -l $lang -c page_separator='' hocr
    source venv/bin/activate
    PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./myfiles/ -p "${img_file%.*}"-%03d.exp0.tif  "${img_file%.*}".hocr 
    deactivate
done
rename s/exp0.txt/exp0-gt.txt/ ./myfiles/*exp0.txt

echo "Image files converted to tif. Correct the ground truth files and then run ocr-d train to create box and lstmf files"

Could you please explain what each line does? I want to run it on my system but am confused about what to change. @Shreeshrii

rraina97 avatar Jun 01 '20 05:06 rraina97

I want to run it on my system but am confused on what to change

Assuming that you have tesseract and hocr-tools installed, put your image (png) files in the ./myfiles/ folder. Change lang=san in the bash script to whichever language you need, e.g. lang=eng, then save and run the bash script.

The script does the following:

  • For each image file, it runs tesseract on the image to produce hocr output, then runs hocr-extract-images to split the image into line images together with the OCRed text for each line.
  • After the loop, it renames the generated text files from *.txt to *.gt.txt. The correct command is the following (. instead of - in the filename): rename s/exp0.txt/exp0.gt.txt/ ./myfiles/*exp0.txt

After this the *.gt.txt files need to be manually corrected to match the line images.

Shreeshrii avatar Jun 01 '20 08:06 Shreeshrii