Attention-ocr-Chinese-Version

Attention OCR Based On TensorFlow

52 Attention-ocr-Chinese-Version issues

Hello, is there a PyTorch version of this code?

I have recently been working on recognition of classical Chinese texts, using the attention module from PaddleOCR as the sequence-prediction part, and I ran into a problem. In the attention decoder, if I feed the decoder's output from the previous time step as the input of the current step, training works poorly: convergence is slow and accuracy stays low. But if I feed the previous step's ground-truth label as the current step's input, convergence takes off and training accuracy quickly reaches 1, yet prediction accuracy stays at 0. It looks as if feeding the ground-truth label directly as the model's input means the model never actually gets trained. From my understanding of seq2seq models, feeding the previous step's ground-truth label as the current step's input during training should push the model in the right direction, make it converge faster, and train it better, yet prediction accuracy stays at 0. I am genuinely confused; could anyone clear this up?
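Feeding the previous step's ground truth during training is ordinary teacher forcing, so the symptom described (training accuracy 1, test accuracy 0) usually points to an off-by-one: if the label of the current step is fed as the input of that same step, the decoder can simply copy its input and learns nothing usable at inference time. Below is a minimal sketch, not the repository's code, of a GRU decoder loop that contrasts properly shifted teacher forcing with free-running decoding; the layer sizes, the `<GO>` id of 0, and the names `enc_context` / `labels` are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative sizes only; enc_context is assumed to be [batch, hidden_dim].
vocab_size, emb_dim, hidden_dim, max_len = 100, 64, 256, 20
embed = tf.keras.layers.Embedding(vocab_size, emb_dim)
cell = tf.keras.layers.GRUCell(hidden_dim)
project = tf.keras.layers.Dense(vocab_size)

def decode(enc_context, labels=None, teacher_forcing=True):
    batch = tf.shape(enc_context)[0]
    state = [enc_context]                      # decoder state initialised from the encoder
    prev_token = tf.zeros([batch], tf.int32)   # <GO> token, id 0 assumed
    step_logits = []
    for t in range(max_len):
        out, state = cell(embed(prev_token), state)
        logits = project(out)
        step_logits.append(logits)
        if teacher_forcing and labels is not None:
            # Teacher forcing: the ground truth of step t becomes the input of
            # step t+1. Feeding labels[:, t] as the input of step t itself lets
            # the decoder copy the answer -> train accuracy 1, test accuracy 0.
            prev_token = labels[:, t]
        else:
            # Free running (inference): feed back the model's own prediction.
            prev_token = tf.argmax(logits, axis=-1, output_type=tf.int32)
    return tf.stack(step_logits, axis=1)       # [batch, max_len, vocab_size]
```

With the shift in place you would train with `teacher_forcing=True` and decode with `teacher_forcing=False`; scheduled sampling, which mixes the two during training, is a common way to narrow the remaining train/test gap.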

How do I predict on a single image of arbitrary size? Which file should I run? I ran demo_reference.py, but it requires a 600x150 input made of 4 images combined. Please advise, thanks.
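One workaround for a single crop (a sketch, not part of the repository) is to scale the image to the height the checkpoint expects and pad the remaining width, so it matches the fixed 600x150 input; the target shape and the white padding below are assumptions to adjust to your checkpoint.

```python
from PIL import Image
import numpy as np

TARGET_H, TARGET_W = 150, 600  # assumed fixed input of the demo checkpoint

def prepare(path):
    img = Image.open(path).convert('RGB')
    # Resize on height, keep aspect ratio, then pad the width with white.
    scale = TARGET_H / img.height
    new_w = min(TARGET_W, max(1, int(img.width * scale)))
    img = img.resize((new_w, TARGET_H), Image.BILINEAR)
    canvas = Image.new('RGB', (TARGET_W, TARGET_H), (255, 255, 255))
    canvas.paste(img, (0, 0))                  # left-align, pad the rest
    return np.asarray(canvas, dtype=np.uint8)  # feed this array to the model
```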

How can images of different widths (long and short) be passed to the model without resizing them?
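A convolutional encoder can in principle handle variable widths, but a batch still needs one shape; a common workaround, sketched below under that assumption, is to pad each image in a batch (same height, different widths) to the widest width in the batch instead of stretching it.

```python
import numpy as np

def pad_batch(images, pad_value=255):
    """Pad HxWxC arrays of equal height to the widest width in the batch."""
    h, c = images[0].shape[0], images[0].shape[2]
    max_w = max(img.shape[1] for img in images)
    batch = np.full((len(images), h, max_w, c), pad_value, dtype=images[0].dtype)
    for i, img in enumerate(images):
        batch[i, :, :img.shape[1]] = img       # original pixels left-aligned
    return batch
```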

Later in training the loss keeps growing, and in the logs the predicted text looks like this: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA uuuuuuuuOOOOOOOOOOOOOOOOOOOOOOOOOOOOO -- I generated the tf-records with the author's script. Does anyone know what could cause this?
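Collapsed outputs like these often trace back to a mismatch between images and labels in the generated records, so a first step is to dump a few records and inspect them by eye. A hedged sketch follows; the feature keys 'image/encoded' and 'image/text' are assumptions, so use whichever keys your conversion script actually wrote.

```python
import tensorflow as tf

# Print the feature keys and the stored label text of the first few records.
for raw in tf.data.TFRecordDataset('train.tfrecord').take(3):
    example = tf.train.Example()
    example.ParseFromString(raw.numpy())
    feats = example.features.feature
    print(sorted(feats.keys()))
    if 'image/text' in feats:                  # assumed key for the label text
        print(feats['image/text'].bytes_list.value)
```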

Instructions for updating: Please switch to tf.train.get_or_create_global_step
INFO:tensorflow:Restoring parameters from /home/ucmed/opt/python/models-master/research/attention_ocr/python/logs/model.ckpt-0
INFO 2019-01-03 02:14:41.000888: tf_logging.py: 82 Restoring parameters from /home/ucmed/opt/python/models-master/research/attention_ocr/python/logs/model.ckpt-0
INFO:tensorflow:Starting Session.
INFO 2019-01-03 02:14:55.000713: tf_logging.py: 82 Starting Session.
INFO:tensorflow:Saving...

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "train.py", line 211, in <module>
    app.run()
  File "/home/shakey/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "train.py", line...

Can we feed a document-image dataset, instead of a small word-crop dataset, to this architecture? What is the maximum sequence length that can be used? Can you please suggest...