DBNet.pytorch
Opendataset issue
Hello, when training ICDAR2015 with the open dataset format, each iteration is slower than with the ICDAR format, and the open dataset CTW dataset has a memory leak. Is there a fix?
@ganggang233 Does this codebase support the CTW1500 dataset? Do you use the opendataset format?
You need to convert it yourself, but apart from ICDAR2015, the other datasets leak memory and cannot be trained.
@liudatutu You can also train with the ICDAR dataset reader by giving it x1,y1,x2,y2,...,xn,yn, but Total-Text and CTW1500 do not reach the accuracy reported in the paper.
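For reference, a minimal sketch of converting CTW1500 curve labels to ICDAR-style `x1,y1,...,xn,yn,text` files. The directory paths and the dummy trailing text field are assumptions, not something defined by this repo; adjust them to your own layout and train list.

```python
import glob
import os

import numpy as np

# Hypothetical paths -- adjust to your layout.
CTW_GT_DIR = 'ctw1500/train/gt'        # original *.txt curve labels
OUT_GT_DIR = 'ctw1500/train/gt_icdar'  # ICDAR-style x1,y1,...,xn,yn,text labels
os.makedirs(OUT_GT_DIR, exist_ok=True)

for gt_path in glob.glob(os.path.join(CTW_GT_DIR, '*.txt')):
    out_lines = []
    with open(gt_path, encoding='utf-8-sig') as f:
        for line in f:
            gt = line.strip().split(',')
            if len(gt) < 32:
                continue
            # CTW1500 line: xmin,ymin,xmax,ymax followed by 14 (x, y) offsets
            # that are relative to (xmin, ymin).
            xmin, ymin = float(gt[0]), float(gt[1])
            offsets = np.asarray([float(v) for v in gt[4:32]], dtype=np.float32)
            poly = offsets + np.array([xmin, ymin] * 14, dtype=np.float32)
            coords = ','.join(str(int(round(v))) for v in poly)
            # CTW1500 curve labels carry no transcription; use a dummy text so
            # the instance is not treated as "###" (ignored).
            out_lines.append(coords + ',text')
    out_path = os.path.join(OUT_GT_DIR, os.path.basename(gt_path))
    with open(out_path, 'w', encoding='utf-8') as f:
        f.write('\n'.join(out_lines))
```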
@ganggang233 I just converted the CTW labels, but the training loss looks very strange, as if nothing is being trained at all:

```
2021-06-10 14:58:04,225 DBNet.pytorch INFO: [1/1200], [166/1000], global_step: 166, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.00037, time:0.80
2021-06-10 14:58:05,026 DBNet.pytorch INFO: [1/1200], [167/1000], global_step: 167, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000370222, time:0.80
2021-06-10 14:58:05,828 DBNet.pytorch INFO: [1/1200], [168/1000], global_step: 168, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000370444, time:0.80
2021-06-10 14:58:06,625 DBNet.pytorch INFO: [1/1200], [169/1000], global_step: 169, speed: 1.3 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000370667, time:0.80
2021-06-10 14:58:07,442 DBNet.pytorch INFO: [1/1200], [170/1000], global_step: 170, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000370889, time:0.82
2021-06-10 14:58:08,251 DBNet.pytorch INFO: [1/1200], [171/1000], global_step: 171, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000371111, time:0.81
2021-06-10 14:58:09,055 DBNet.pytorch INFO: [1/1200], [172/1000], global_step: 172, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000371333, time:0.80
2021-06-10 14:58:09,861 DBNet.pytorch INFO: [1/1200], [173/1000], global_step: 173, speed: 1.2 samples/sec, acc: 0.1761, iou_shrink_map: 0.1761, loss: 1.0000, loss_shrink_maps: 0.0000, loss_threshold_maps: 0.0000, loss_binary_maps: 1.0000, , lr:0.000371556, time:0.81
```
Did you open TensorBoard to check that the label conversion is correct?
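Besides TensorBoard, a quick sanity check is to draw the converted polygons back onto the image; if the boxes do not line up with the text, the conversion is wrong. A minimal sketch, assuming ICDAR-style `x1,y1,...,xn,yn,text` label files and hypothetical paths:

```python
import cv2
import numpy as np

# Hypothetical paths for one converted sample.
img_path = 'ctw1500/train/img/0001.jpg'
gt_path = 'ctw1500/train/gt_icdar/0001.txt'

img = cv2.imread(img_path)
with open(gt_path, encoding='utf-8') as f:
    for line in f:
        parts = line.strip().split(',')
        coords = [float(v) for v in parts[:-1]]  # drop the trailing text field
        poly = np.array(coords, dtype=np.int32).reshape(-1, 2)
        cv2.polylines(img, [poly], isClosed=True, color=(0, 0, 255), thickness=2)
cv2.imwrite('label_check.jpg', img)
```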
@ganggang233 Do you have code that converts the CTW dataset to the opendata JSON format? I want to check whether mine is written incorrectly.
@ganggang233 I suspect my CTW polygon parsing is wrong. Mine looks like this:

```python
gt = line.strip().split(',')
x1 = np.int(gt[0])
y1 = np.int(gt[1])
bbox = [np.int(gt[i]) for i in range(4, 32)]
bbox = np.asarray(bbox) + ([x1 * 1.0, y1 * 1.0] * 14)  # the 14 points are offsets relative to (x1, y1)
bbox = bbox.reshape((-1, 2)).tolist()
text_meta['polygon'] = bbox
```
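For comparison, a minimal sketch that wraps the same parsing into a JSON file. The key names (`data_root`, `data_list`, `img_name`, `annotations`, `polygon`, `text`, `illegibility`, `language`) and the paths are assumptions about the open-dataset format; verify them against the loader and config you are actually using.

```python
import glob
import json
import os

import numpy as np

# Hypothetical layout -- adjust to your own.
IMG_DIR = 'ctw1500/train/img'
GT_DIR = 'ctw1500/train/gt'
OUT_JSON = 'ctw1500/train/train.json'

data_list = []
for gt_path in sorted(glob.glob(os.path.join(GT_DIR, '*.txt'))):
    img_name = os.path.basename(gt_path).replace('.txt', '.jpg')
    annotations = []
    with open(gt_path, encoding='utf-8-sig') as f:
        for line in f:
            gt = line.strip().split(',')
            if len(gt) < 32:
                continue
            # Same parsing as above: 14 points stored as offsets from (xmin, ymin).
            xmin, ymin = float(gt[0]), float(gt[1])
            offsets = np.asarray([float(v) for v in gt[4:32]], dtype=np.float32)
            polygon = (offsets + np.array([xmin, ymin] * 14, dtype=np.float32))
            annotations.append({
                'polygon': polygon.reshape(-1, 2).tolist(),
                'text': 'text',          # CTW1500 curve labels have no transcription
                'illegibility': False,
                'language': 'Latin',
            })
    data_list.append({'img_name': img_name, 'annotations': annotations})

with open(OUT_JSON, 'w', encoding='utf-8') as f:
    json.dump({'data_root': IMG_DIR, 'data_list': data_list}, f, ensure_ascii=False)
```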
Has anyone solved the memory leak when training with the opendataset format?
Hello, I converted Total-Text to the IC15 format, but after training for 50 epochs all the metrics are still 0. Did you run into this?