E2E-MLT
E2E-MLT - an Unconstrained End-to-End Method for Multi-Language Scene Text
When I read the code, I am confused by `th13 = (2 * xc - input_W - 1) / (input_W - 1)`. Could you give some links explaining this?
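For context, this expression closely resembles the pixel-to-normalized-coordinate mapping used by PyTorch's `grid_sample`/`affine_grid`, which expect sampling coordinates in [-1, 1]. A minimal sketch of that convention (plain Python; the function name is illustrative, not from the repo):

```python
def normalize_x(xc, input_W):
    """Map a pixel x-coordinate in [0, input_W - 1] to [-1, 1],
    the range expected by torch.nn.functional.grid_sample."""
    return (2.0 * xc - (input_W - 1)) / (input_W - 1)

# The left edge maps to -1 and the right edge to +1:
print(normalize_x(0, 100))   # -1.0
print(normalize_x(99, 100))  # 1.0
```

Note that the expression in the question reads `- input_W - 1` rather than `- (input_W - 1)`; whether that one-unit offset is intentional is exactly the kind of detail worth asking the author about.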
@MichalBusta Hi, there are 7398 characters in codec.txt. In models.py, why is `self.conv11 = Conv2d(256, 8400, (1, 1), padding=(0, 0))`? I think it should be 7399.
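For context, a CTC-trained recognizer needs one extra output class for the blank symbol on top of the codec, which is presumably where the expectation of 7398 + 1 = 7399 comes from; if the decoder only reads the first `num_classes + 1` channels, any extra channels would simply go unused. A tiny sketch of the arithmetic (an assumption about the head's layout, not the repo's actual code):

```python
codec_size = 7398             # characters listed in codec.txt
ctc_classes = codec_size + 1  # + 1 for the CTC blank symbol
print(ctc_classes)            # 7399

# The layer in models.py allocates more output channels than that:
conv_out_channels = 8400
print(conv_out_channels - ctc_classes)  # 1001 seemingly spare channels
```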
```
Traceback (most recent call last):
  File "demo.py", line 10, in <module>
    from nms import get_boxes
  File "/media/chen/软件/DeepCode/E2E-MLT/nms/__init__.py", line 8, in <module>
    raise RuntimeError('Cannot compile nms: {}'.format(BASE_DIR))
RuntimeError: Cannot compile nms: /media/chen/软件/DeepCode/E2E-MLT/nms
```
It seems the files inside the nms directory fail to compile.
Why is my loss negative or 0.000000? Please help! @MichalBusta Like this:
I started training again and noticed that many characters are not present in codec_rev. The data is from ICDAR 2015, ICDAR 2017 (MLT), and ICDAR 2019 (MLT), using the provided codec.txt...
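A quick way to diagnose this is to diff the characters appearing in the ground truth against the codec. A hypothetical helper (names and inputs are illustrative, not from the repo):

```python
def missing_chars(codec_chars, gt_texts):
    """Return characters that appear in ground-truth transcriptions
    but are absent from the codec."""
    codec = set(codec_chars)
    seen = {ch for text in gt_texts for ch in text}
    return sorted(seen - codec)

# Example: 'd' occurs in the ground truth but not in the codec.
print(missing_chars('abc', ['bad', 'cab']))  # ['d']
```

In practice you would pass the contents of codec.txt (read with UTF-8 encoding) and the transcriptions extracted from the ICDAR ground-truth files.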
Since your project has a docker folder, I tried running `build_docker.sh` to create the environment, but `build_docker.sh` doesn't work. Is there a working Docker setup for this project?
When I train, I run into the problem below.
```
CUDA_VISIBLE_DEVICES="0,1" python3 train_ocr.py -train_list=sample_train_data/MLT/trainMLT.txt -valid_list=data/valid/valid.txt -model=e2e-mlt.h5 -debug=1 -batch_size=8 -num_readers=5
7398
loading model from e2e-mlt.h5 e2e-mlt.h5
2 training images in sample_train_data/MLT/trainMLT.txt
2 training...
```