ocrd_tesserocr
Crop, deskew, segment into regions / tables / lines / words, or recognize with tesserocr
Introduction
This package offers OCR-D compliant workspace processors for (much of) the functionality of Tesseract via its Python API wrapper tesserocr. (Each processor is a parameterizable step in a configurable workflow of the OCR-D functional model. There are usually various alternative processor implementations for each step. Data is represented with METS and PAGE.)
It includes image preprocessing (cropping, binarization, deskewing), layout analysis (region, table, line, word segmentation), script identification, font style recognition and text recognition.
Most processors can operate on different levels of the PAGE hierarchy, depending on the workflow configuration. In PAGE, image results are referenced (read and written) via AlternativeImage, text results via TextEquiv, font attributes via TextStyle, script via @primaryScript, deskewing via @orientation, cropping via Border and segmentation via Region / TextLine / Word elements with Coords/@points.
Installation
With docker
This is the best option if you want to run the software in a container.
You need to have Docker installed. Then pull the image:
docker pull ocrd/tesserocr
To run with docker:
docker run -v path/to/workspaces:/data ocrd/tesserocr ocrd-tesserocr-crop ...
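For example, to crop all pages of a single workspace (a sketch: the directory containing the workspace's mets.xml is assumed to be mounted at /data, and OCR-D-IMG / OCR-D-CROP are placeholder file groups):
docker run --rm -v /path/to/workspace:/data -w /data ocrd/tesserocr ocrd-tesserocr-crop -I OCR-D-IMG -O OCR-D-CROP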
From PyPI and Tesseract provided by system
If your operating system / distribution already provides Tesseract 4.1 or newer, then just install its development package:
# on Debian / Ubuntu:
sudo apt install libtesseract-dev
Otherwise, recent Tesseract packages for Ubuntu are available via PPA alex-p, which has up-to-date builds of Tesseract and its dependencies:
# on Debian / Ubuntu
sudo add-apt-repository ppa:alex-p/tesseract-ocr
sudo apt-get update
sudo apt install libtesseract-dev
Once Tesseract is available, just install ocrd_tesserocr from PyPI:
pip install ocrd_tesserocr
We strongly recommend setting up a venv first.
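For example, using Python's built-in venv module (the directory name venv is arbitrary):
python3 -m venv venv
source venv/bin/activate
pip install ocrd_tesserocr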
From git
Use this option if there is no suitable prebuilt version of Tesseract available on your system, or you want to change the source code or install the latest, unpublished changes.
git clone https://github.com/OCR-D/ocrd_tesserocr
cd ocrd_tesserocr
# install Tesseract:
sudo make deps-ubuntu # system dependencies just for the build
make deps
# install tesserocr and ocrd_tesserocr:
make install
We strongly recommend setting up a venv first.
Models
Tesseract comes with synthetically trained models for languages (tesseract-ocr-{eng,deu,deu_latf,...})
or scripts (tesseract-ocr-script-{latn,frak,...}). In addition, various models
trained on scan data are available from the community.
Since all OCR-D processors must resolve file/data resources
in a standardized way,
and we want to stay interoperable with standalone Tesseract
(which uses a single compile-time tessdata directory),
ocrd-tesserocr-recognize expects the recognition models to be installed
in its module resource location only.
The module location is determined by the underlying Tesseract installation
(compile-time tessdata directory, or run-time $TESSDATA_PREFIX environment variable).
Other resource locations (data/system/cwd) will be ignored, and should not be used
when installing models with the Resource Manager (ocrd resmgr download).
To see the module resource location of your installation:
ocrd-tesserocr-recognize -D
For a full description of available commands for resource management, see:
ocrd resmgr --help
ocrd resmgr list-available --help
ocrd resmgr download --help
ocrd resmgr list-installed --help
Note: (In previous versions, the resource locations of standalone Tesseract and the OCR-D wrapper were different. If you already have models under
$XDG_DATA_HOME/ocrd-resources/ocrd-tesserocr-recognize, usually ~/.local/share/ocrd-resources/ocrd-tesserocr-recognize, then consider moving them to the new default under ocrd-tesserocr-recognize -D, usually /usr/share/tesseract-ocr/4.00/tessdata, or alternatively overriding the module directory by setting TESSDATA_PREFIX=$XDG_DATA_HOME/ocrd-resources/ocrd-tesserocr-recognize in the environment.)
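For example, such a migration could be done as follows (a sketch; the paths are the typical defaults mentioned above, adjust them to your installation):
# move the old models to the module location reported by -D:
mv ~/.local/share/ocrd-resources/ocrd-tesserocr-recognize/*.traineddata "$(ocrd-tesserocr-recognize -D)"
# or keep the old location and override the module directory instead:
export TESSDATA_PREFIX=$XDG_DATA_HOME/ocrd-resources/ocrd-tesserocr-recognize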
Cf. OCR-D model guide.
Models always use the filename suffix .traineddata, but are just loaded by their basename.
You will need at least eng and osd installed (even for segmentation and deskewing),
probably also Latin and Fraktur etc. So to get minimal models, do:
ocrd resmgr download ocrd-tesserocr-recognize eng.traineddata
ocrd resmgr download ocrd-tesserocr-recognize osd.traineddata
(This will already be installed if using the Docker or git installation option.)
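To additionally cover Latin and Fraktur typefaces, for example, the corresponding script models can be fetched the same way (model names as in upstream tessdata; pick whichever your material needs):
ocrd resmgr download ocrd-tesserocr-recognize Latin.traineddata
ocrd resmgr download ocrd-tesserocr-recognize Fraktur.traineddata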
As of v0.13.1, you can configure ocrd-tesserocr-recognize to select models dynamically segment by segment,
either via custom conditions on the PAGE-XML annotation (presented as XPath rules),
or by automatically choosing the model with highest confidence.
Usage
For details, see docstrings in the individual processors
and ocrd-tool.json descriptions,
or simply --help.
Available OCR-D processors are:
- ocrd-tesserocr-crop (simplistic)
  - sets Border of pages and adds AlternativeImage files to the output fileGrp
- ocrd-tesserocr-deskew (for skew and orientation; mind operation_level)
  - sets @orientation of regions or pages and adds AlternativeImage files to the output fileGrp
- ocrd-tesserocr-binarize (Otsu – not recommended, unless already binarized and using tiseg)
  - adds AlternativeImage files to the output fileGrp
- ocrd-tesserocr-recognize (optionally including segmentation; mind segmentation_level and textequiv_level)
  - adds TextRegions, TableRegions, ImageRegions, MathsRegions, SeparatorRegions, NoiseRegions, ReadingOrder and AlternativeImage to Page and sets their @orientation (optionally)
  - adds TextRegions to TableRegions and sets their @orientation (optionally)
  - adds TextLines to TextRegions (optionally)
  - adds Words to TextLines (optionally)
  - adds Glyphs to Words (optionally)
  - adds TextEquiv
- ocrd-tesserocr-segment (all-in-one segmentation – recommended; delegates to recognize)
  - adds TextRegions, TableRegions, ImageRegions, MathsRegions, SeparatorRegions, NoiseRegions, ReadingOrder and AlternativeImage to Page and sets their @orientation
  - adds TextRegions to TableRegions and sets their @orientation
  - adds TextLines to TextRegions
  - adds Words to TextLines
  - adds Glyphs to Words
- ocrd-tesserocr-segment-region (only regions – with overlapping bboxes; delegates to recognize)
  - adds TextRegions, TableRegions, ImageRegions, MathsRegions, SeparatorRegions, NoiseRegions and ReadingOrder to Page and sets their @orientation
- ocrd-tesserocr-segment-table (only table cells; delegates to recognize)
  - adds TextRegions to TableRegions
- ocrd-tesserocr-segment-line (only lines – from overlapping regions; delegates to recognize)
  - adds TextLines to TextRegions
- ocrd-tesserocr-segment-word (only words; delegates to recognize)
  - adds Words to TextLines
- ocrd-tesserocr-fontshape (only text style – via Tesseract 3 models)
  - adds TextStyle to Words
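A typical workflow chaining some of these processors could look like the following (a sketch to be run inside a workspace directory; the file group names and the eng model are placeholders for your data):
ocrd-tesserocr-crop -I OCR-D-IMG -O OCR-D-CROP
ocrd-tesserocr-segment -I OCR-D-CROP -O OCR-D-SEG
ocrd-tesserocr-recognize -I OCR-D-SEG -O OCR-D-OCR -P model eng -P textequiv_level glyph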
The text region @types detected are (from Tesseract's PolyBlockType):
- paragraph: normal block (aligned with others in the column)
- floating: unaligned block (is in a cross-column pull-out region)
- heading: block that spans more than one column
- caption: block for text that belongs to an image
If you are unhappy with these choices, then consider post-processing
with a dedicated custom processor in Python, or by modifying the PAGE files directly
(e.g. xmlstarlet ed --inplace -u '//pc:TextRegion/@type[.="floating"]' -v paragraph filegrp/*.xml).
All segmentation is currently done as bounding boxes only by default, i.e. without precise polygonal outlines. For dense page layouts this means that neighbouring regions and neighbouring text lines may overlap a lot. If this is a problem for your workflow, try post-processing like so:
- after line segmentation: use ocrd-cis-ocropy-resegment for polygonalization, or ocrd-cis-ocropy-clip on the line level
- after region segmentation: use ocrd-segment-repair with plausibilize (and sanitize after line segmentation)
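For example (a sketch; file group names are placeholders, and these processors come from the ocrd_segment and ocrd_cis packages respectively):
# after region segmentation:
ocrd-segment-repair -I OCR-D-SEG -O OCR-D-SEG-REPAIR -P plausibilize true
# after line segmentation:
ocrd-cis-ocropy-resegment -I OCR-D-LINE -O OCR-D-LINE-RESEG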
It also means that Tesseract should be allowed to segment across multiple hierarchy levels at once, to avoid introducing inconsistent/duplicate text line assignments in text regions, or word assignments in text lines. Hence,
- prefer ocrd-tesserocr-recognize with segmentation_level=region over ocrd-tesserocr-segment followed by ocrd-tesserocr-recognize, if you want to do all in one with Tesseract,
- prefer ocrd-tesserocr-recognize with segmentation_level=line over ocrd-tesserocr-segment-line followed by ocrd-tesserocr-recognize, if you want to do everything but region segmentation with Tesseract,
- prefer ocrd-tesserocr-segment over ocrd-tesserocr-segment-region followed by (ocrd-tesserocr-segment-table and) ocrd-tesserocr-segment-line, if you want to do everything but recognition with Tesseract.
However, you can also run ocrd-tesserocr-segment* and ocrd-tesserocr-recognize
with shrink_polygons=True to get polygons by post-processing each segment,
shrinking to the convex hull of all its symbol outlines.
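For example (a sketch; file group names are placeholders):
ocrd-tesserocr-segment -I OCR-D-CROP -O OCR-D-SEG -P shrink_polygons true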
Testing
make test
This downloads some test data from https://github.com/OCR-D/assets under repo/assets,
and runs some basic tests of the Python API as well as the CLIs.
Set PYTEST_ARGS="-s --verbose" to see log output (-s) and individual test results (--verbose).
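For example:
make test PYTEST_ARGS="-s --verbose"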