FOTS_TF
This is an implementation of FOTS with TensorFlow.
I encountered this error when testing the model: "assertion failed: [width must be >= target + offset.]" Does anyone know why?
InvalidArgumentError (see above for traceback): assertion failed: [width must be >= target + offset.] This error is caused by RoI_rotate.py line 112: roi = tf.image.crop_to_bounding_box(_affine_feature_map, 0, 0, 8, width_box). _affine_feature_map's width...
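A likely cause is that `width_box` can exceed the actual width of `_affine_feature_map` for narrow or heavily distorted boxes, so the crop target overruns the tensor. Below is a minimal defensive sketch, not the repo's actual fix; the names `feature_map`/`safe_crop` are hypothetical, chosen to mirror the snippet above:

```python
import tensorflow as tf

def safe_crop(feature_map, target_height, target_width):
    """Crop from the top-left corner, clamping the target size to the
    feature map's actual dimensions so crop_to_bounding_box's
    'width must be >= target + offset' assertion cannot fire."""
    h = tf.shape(feature_map)[0]
    w = tf.shape(feature_map)[1]
    crop_h = tf.minimum(target_height, h)
    crop_w = tf.minimum(target_width, w)
    return tf.image.crop_to_bounding_box(feature_map, 0, 0, crop_h, crop_w)
```

With clamping, a `width_box` larger than the feature map simply returns the full available width instead of raising `InvalidArgumentError`.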
How can I recognize Chinese characters?
My task is mainly in Chinese; how can this method be applied to it?
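To recognize Chinese, the recognition alphabet has to be extended with the characters that appear in your labels, and the CTC output layer size follows from the alphabet length. A minimal sketch of the idea, assuming a config constant named `CHAR_VECTOR` (several FOTS forks use this name, but treat it as an assumption for this repo):

```python
# Sketch: extend the recognition alphabet with Chinese characters.
# `CHAR_VECTOR` mirrors the config constant used by several FOTS
# forks; the exact variable name in this repo is an assumption.
CHAR_VECTOR = "0123456789abcdefghijklmnopqrstuvwxyz"

# Characters collected from your own label files (example values).
chinese_chars = "广告牌识别"

# Append only characters not already present, keeping order stable
# so existing label indices do not shift.
CHAR_VECTOR += "".join(c for c in chinese_chars if c not in CHAR_VECTOR)

# CTC needs one extra class for the blank label.
NUM_CLASSES = len(CHAR_VECTOR) + 1
```

In practice you would scan all transcription files once to collect the full character set, then retrain the recognition branch, since the pretrained output layer no longer matches the new class count.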
why so slow?
Find 1 images 6055 text boxes before nms test/screenshot.png : detect 3372ms, restore 5ms, nms 147ms, recog 4796ms [timing] 8.196640491485596 
During training, the console printed the warning "poly in wrong direction".
How can I test the recognition branch on its own? I have no detection label data, so I currently don't know how to judge whether the model performs well.
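One way to evaluate the recognition branch without detection labels is to feed it pre-cropped word images and score the predicted strings against the transcriptions, e.g. with edit distance. A self-contained sketch of the metric (the evaluation loop around it would depend on your data loader; these helper names are hypothetical):

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def char_accuracy(preds, gts):
    """Mean of (1 - normalized edit distance) over prediction/label pairs."""
    total = sum(1 - edit_distance(p, g) / max(len(g), 1)
                for p, g in zip(preds, gts))
    return total / len(gts)
```

Character-level accuracy plus exact-match rate on a held-out set gives a reasonable picture of recognition quality independently of detection.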
When training on the ICDAR2019 billboard dataset, the detection loss converges to 0.015, but the recognition loss stops decreasing at about 20. I also tried adjusting the learning rate, with no effect. I added many simplified and traditional Chinese characters to the char list in config. Is the high loss caused by the Chinese dataset? Is a recognition loss of about 20 too high? The output of my own ground-truth preprocessing looks like this:
Hello, I want to pre-train the recognition branch on the 3.6-million-image Synthetic Chinese String Dataset. I found that this dataset has no text-position labels, only text-content labels, and its annotation format differs from the common SynthText one. How should I modify the code so this data can be read in for training? Thanks.
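Since each image in the Synthetic Chinese String Dataset contains a single text line, one workable adaptation is to synthesize an ICDAR-style ground-truth line whose quadrilateral covers the whole image, then reuse the existing annotation loader unchanged. A hypothetical sketch (the function name and the exact annotation format expected by this repo are assumptions):

```python
def make_full_image_gt(width, height, text):
    """Build an ICDAR-style annotation line
    (x1,y1,x2,y2,x3,y3,x4,y4,transcription) whose quadrilateral
    spans the whole image, clockwise from the top-left corner."""
    quad = [0, 0,                      # top-left
            width - 1, 0,              # top-right
            width - 1, height - 1,     # bottom-right
            0, height - 1]             # bottom-left
    return ",".join(str(v) for v in quad) + "," + text
```

Running this once per image produces per-image `.txt` files in the format the detection/recognition data pipeline already parses, so only the label-generation step changes, not the training code.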
models
I have trained a model on the SynthText dataset; the detection is okay, but the recognition is very bad. https://drive.google.com/open?id=1SZaPveIjdhpkQgv6UL75VRi2c_kZIIhr
Hi @Pay20Y, I was training the model with 200000 samples like this: Img:  label: 2,3,48,3,48,24,3,25,quyền 51,5,81,4,81,22,51,23,hạn 84,5,110,4,111,19,84,20,của Loss:  but when I test, the results look like: img: ...