DBNet.pytorch
A PyTorch re-implementation of "Real-time Scene Text Detection with Differentiable Binarization"
Hi, in random_crop_data.py, when the class EastRandomCropData uses the split_regions function to extract runs of consecutive indices, it drops the last consecutive run. On some images (I used IMG_0080.JPG from TD500) this causes text regions to be lost after augmentation; I hit the same problem when testing the original DB implementation. Is this a bug, or is it intentional?
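The pattern the issue describes can be reproduced with a minimal sketch (simplified from the shape of split_regions, not the repo's exact code): a loop that only appends a run when it sees a gap never emits the final run, so the tail has to be appended after the loop.

```python
import numpy as np

def split_regions(axis):
    """Split a sorted 1-D index array into runs of consecutive indices.

    The reported bug: appending a run only when a gap is found silently
    drops the last run. Appending the tail after the loop keeps it.
    """
    regions = []
    start = 0
    for i in range(1, axis.shape[0]):
        if axis[i] != axis[i - 1] + 1:
            regions.append(axis[start:i])
            start = i
    regions.append(axis[start:])  # fix: keep the final consecutive run
    return regions

runs = split_regions(np.array([0, 1, 2, 5, 6, 9]))
print([r.tolist() for r in runs])  # [[0, 1, 2], [5, 6], [9]]
```

Without the final `append`, the same input would return only `[[0, 1, 2], [5, 6]]`, which matches the lost-text-region symptom described above.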
Long text lines get truncated, for example on ID cards. Is there any way to solve this?
When I train on my own dataset, I get "test: recall: 0.000000, precision: 0.000000, f1: 0.000000". Have you encountered this problem?
After training for a number of epochs (acc: 0.9589, iou_shrink_map: 0.9052, loss: 0.7760, loss_shrink_maps: 0.1584, loss_threshold_maps: 0.0471, loss_binary_maps: 0.1470), I evaluated on the test set. Even with thre and box_thre both set to 0.01, many text regions are still missed. Where could the problem be? Thanks for your help!
urllib.error.HTTPError: HTTP Error 403: Forbidden
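A 403 when downloading pretrained weights often comes from the server rejecting urllib's default user agent. A common workaround (an assumption about the cause, not a repo-specific fix) is to install a global opener that sends a browser-like User-Agent header before the download runs:

```python
import urllib.request

# Install an opener whose User-Agent mimics a browser; downloads that go
# through urllib.request (including torchvision/torch.hub weight fetches)
# will then send this header instead of the default Python user agent.
opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
```

After this runs once at the top of the training script, subsequent `urllib.request.urlretrieve(...)` calls use the new header.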
Every re-run starts training from scratch and overwrites the previous best model.
The original path 'config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml' needs to be changed to '../config/icdar2015_resnet18_FPN_DBhead_polyLR.yaml'.
Traceback (most recent call last):
  File "tools/train.py", line 78, in <module>
    main(config)
  File "tools/train.py", line 37, in main
    train_loader = get_dataloader(config['dataset']['train'], config['distributed'])
  File "/content/drive/MyDrive/DBnet/DBNet.pytorch-master/data_loader/__init__.py", line 84, in get_dataloader
    _dataset = get_dataset(data_path=data_path,...