CRAFT-Reimplementation
CRAFT-PyTorch: Character Region Awareness for Text Detection, reimplemented in PyTorch
from augmentation import random_rot, crop_img_bboxes
from gaussianmap import gaussion_transform, four_point_transform
from generateheatmap import add_character, generate_target, add_affinity, generate_affinity, sort_box, real_affinity, generate_affinity_box
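The `gaussianmap` helpers above build CRAFT's character region scores from a 2D Gaussian that is warped onto each character box. As a minimal sketch of that first step (plain NumPy; `gaussian_heatmap` is a hypothetical name, not the repo's actual function):

```python
import numpy as np

def gaussian_heatmap(size=64, sigma_ratio=0.25):
    """Isotropic 2D Gaussian kernel on a size x size grid, peaking
    near 1.0 at the center. CRAFT-style pipelines warp such a kernel
    onto each character's quadrilateral with a perspective transform."""
    sigma = size * sigma_ratio
    ax = np.arange(size) - (size - 1) / 2.0   # coordinates centered on the grid
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

heat = gaussian_heatmap()
```

In the actual code the warping onto a character box would be done by something like `four_point_transform` from the imports above.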
About training
Thank you very much for open-sourcing your code. After reading your implementation, I have a few questions. 1. I see that the pretrained models you provide were trained on ic13 + ic17, but looking at the annotations of those two datasets, ic13 gt lines contain four numbers while ic17 lines contain eight, yet both datasets are loaded through the same function. Doesn't reading the gt coordinates cause a conflict? 2. CRAFT is supposed to detect text of arbitrary shape. You trained on ic13, ic15, ic17, SynthText, etc., none of which contain curved text (e.g. the Starbucks logo), yet the trained model can still detect such shapes. Could you explain why? 3. If I want to train on the total_text dataset, can this code be used directly, the same way as for ic15? Or, if I want to strengthen the model's ability to detect arbitrarily shaped text, how should I train? Looking forward to your reply, thanks.
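On question 1: the 4-number (axis-aligned box) and 8-number (quadrilateral) gt formats can indeed be served by one loader without conflict, by normalizing both to four corner points. A minimal sketch, assuming plain Python (`parse_gt_line` is a hypothetical helper, not the repo's actual function):

```python
def parse_gt_line(line):
    """Parse one ICDAR-style gt line into a list of four (x, y) corners.

    ic13 lines carry 4 leading numbers (x1, y1, x2, y2 of an
    axis-aligned box); ic15/ic17 lines carry 8 (four corner points,
    followed by script/transcription fields). Parsing stops at the
    first non-numeric field, so both formats go through one path.
    """
    nums = []
    for part in line.strip().split(','):
        try:
            nums.append(int(part.strip()))
        except ValueError:
            break  # reached the transcription (or script) field
    if len(nums) >= 8:
        xs, ys = nums[0:8:2], nums[1:8:2]
        return list(zip(xs, ys))
    x1, y1, x2, y2 = nums[:4]
    # expand the axis-aligned box into a clockwise quadrilateral
    return [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
```

With both formats reduced to four corners, downstream heatmap generation does not need to know which dataset a sample came from.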
I trained the vgg16 model on SynthText following the readme. But when I test the model, I find I have no test data. In test.py the test data is configured as follows: image_list,...
Hi, I want the final output to contain character-level bboxes rather than whole-word bboxes, but when calling the function below I don't know how to modify the watershed algorithm in watershed.py, or which function I should call. There is a getDetCharBoxes_core function, but its output is no different from word-level boxes. Should I call the watershed function instead? If so, what are its inputs? Could you clarify? Thanks.
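For reference, when characters are well separated in the region score, character-level boxes can already be pulled out with a threshold plus connected components, before any watershed refinement. A minimal NumPy sketch (hypothetical helper, not the repo's watershed.py; the real code refines touching characters with a watershed pass):

```python
import numpy as np

def char_boxes_from_region_score(score, thresh=0.4):
    """Threshold the CRAFT region score and return the bounding box
    (x_min, y_min, x_max, y_max) of each 4-connected component.
    Each blob corresponds to one character peak in the heatmap."""
    mask = score > thresh
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # flood-fill one component with an explicit stack
                stack = [(sy, sx)]
                seen[sy, sx] = True
                ys, xs = [], []
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

When adjacent characters merge into one blob, that is where the watershed step (seeded at local maxima of the score) becomes necessary.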
Hi @backtime92, I am trying to replace the backbone from vgg16_bn with SqueezeNet, but I am not sure which slices are best to take. Can you help?...
Hi, I tried your trainSyndata.py on the data at http://www.robots.ox.ac.uk/~vgg/data/scenetext/. However, after 16 epochs the hmean on IC2013 is still around 65%, which is far below your reported result (76.33%). Could you suggest what...
Hi all, has anyone tried MobileNet or another lightweight backbone for training CRAFT?
In readme.md you said "download the Syndata(I will give the link)", but I do not see the link. Can you provide it? I want to re-train on SynthText.