Robert Sachunsky
> ... or new parameters for the processors which use TF. Yes. My point was: changes to the modules require maximal effort and are difficult to enforce/maintain uniformly. > I...
Getting back onto the original topic of [changing the process API](https://github.com/OCR-D/core/issues/322#issuecomment-592775407) to support catching page-level processor failures or to employ page-level parallelism in core, besides the concerns already raised about [multi-fileGrp...
I wrote a [summary](https://hackmd.io/23-JzLp_Q96cb6T0ttoFIA) for the above two aspects of parallelisation and error recovery, plus the additional aspect of overhead, with a proposal for an overall solution (workflow server and...
Alternatively, one could just run `mogrify -auto-orient image` via ImageMagick on the images before import...
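The same normalization can be done in Python via Pillow's `ImageOps.exif_transpose`, which is equivalent in effect to `mogrify -auto-orient`; a sketch only (function and file names here are illustrative, not from the thread):

```python
from PIL import Image, ImageOps

def auto_orient(src, dst):
    """Rotate/flip an image according to its EXIF Orientation tag,
    then save the upright result (like `mogrify -auto-orient`)."""
    with Image.open(src) as img:
        upright = ImageOps.exif_transpose(img)
        upright.save(dst)
```

Applied over a directory before import, this avoids shelling out to ImageMagick at all.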
Oh, that would be great – thanks in advance! I expect just initializing with your pre-trained models and training on new data would quickly make the model forget your large...
I've merged the current master to give the new CI a try. Unfortunately, the credentials problem persists... The current GitHub workflow configuration still tries to _build and push Docker images...
> If you work on the fork it will not work. Dockerhub credentials are added to the original repo and only accessible here. > I don't see much value in...
Oh, now I understand. I am a bit surprised you prefer Docker for testing over a native venv, but in that case, I always try to keep an up-to-date `bertsky/coco_explorer`...
> > Please check the current workflow in master if this will work. So partial success it is. PR still triggers the publish job (which fails), but also the build...
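One common way to keep a publish job from firing on PRs (where fork builds have no access to the repository's Docker Hub secrets) is a job-level condition in the workflow; a sketch only, with job names, targets, and the secret name being illustrative rather than taken from the actual workflow:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make test
  publish:
    # only run on pushes to the original repo, never on PRs from forks,
    # where the Docker Hub credentials are unavailable
    if: github.event_name == 'push'
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Push Docker image
        run: make docker-push  # hypothetical target
        env:
          DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}  # hypothetical secret name
```

With such a guard, PRs would still run the build job for feedback while skipping the failing publish step entirely.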
Along the same lines, adding **multithreading** to the Streamlit server could be worthwhile to improve performance. This seems to be quite an undertaking at the moment though, cf. comments [here](https://discuss.streamlit.io/t/issue-with-threading-text-not-displayed/3981/6)...