weilanShi
> I tried your method, but it seems to be training from scratch.

> I can't explain why it isn't working for you.
> I think you have to use the same command (configuration: --dataset, --input_scale_size, etc.) as before to continue.
> But... I don't...
> I want to collect generated images individually. How can I do that?

I have the same problem. Have you solved it? Can you add my QQ: 1304020120? Thank...
> Traceback (most recent call last):
>   File "train.py", line 626, in <module>
>     # device=device, )
>   File "train.py", line 425, in train
>     evaluator = evaluate(eval_model, val_loader, config, device)
> ...
> I trained a hand detector based on your code. It currently produces some false positives: boxes with high scores appear on images that contain no hands at all.
> I tried adding a batch of pure negative samples (images with no hands and therefore no boxes) to training, but computing the loss raises an error. Is this because SSD's positive/negative sampling fails when an image has no positives? Should I randomly draw some background boxes on these pure negative samples?
>
> How should I handle this situation?

Hi, I have worked on this before. The way SSD works, negative samples are only mined from images that contain positives, so images without any target contribute nothing to learning and will not reduce false positives. Instead, you can randomly paste positive samples onto the background images that tend to produce false positives. I also have a question: for hand detection, did you use a VOC-format dataset directly? I am also doing hand detection, and my loss drops very fast, which feels abnormal; I am still training.
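The "paste positives onto false-positive-prone backgrounds" idea above can be sketched with a minimal copy-paste augmentation helper. This is an illustration, not code from the repo: `paste_positive` is a hypothetical name, and it assumes images are NumPy arrays (H, W, C) with the pasted crop fully inside the background.

```python
import numpy as np

def paste_positive(background, crop, x, y):
    """Paste a positive-sample crop onto a background image at (x, y).

    Returns the augmented image and the new ground-truth box
    (x1, y1, x2, y2) that should be added to the annotations.
    Assumes the crop fits entirely inside the background.
    """
    h, w = crop.shape[:2]
    out = background.copy()           # keep the original background intact
    out[y:y + h, x:x + w] = crop      # overwrite the region with the crop
    return out, (x, y, x + w, y + h)
```

With such images in the training set, SSD can mine hard negatives from exactly the backgrounds that currently cause false positives, since each image now contains a positive.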
> You can try changing the initial weight in the cfg file under the core directory: in the line C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/yolov3_coco.ckpt", change the path after the equals sign to the ckpt file you have generated so far. It might work; I haven't tried it myself...

> I ran 9 epochs and the loss is around 8; after that it becomes hard to train further and the loss barely drops. I ran a test and the accuracy is already quite high.

Is your training loss around 8? Mine too. I have run several hundred epochs and the loss is still high; I haven't tested yet, so I don't know how well it works.
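The cfg change suggested above would look roughly like the following. This is a hedged sketch: the exact variable name and file layout should be verified against your copy of the repo, and the resumed-checkpoint path is a placeholder for whatever your last training run produced.

```python
# core/config.py -- sketch of the suggested edit, assuming an easydict-style cfg
# Default: start from the pretrained COCO weights
__C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/yolov3_coco.ckpt"

# To continue from your own run, point it at the ckpt you generated instead,
# e.g. (placeholder path -- substitute your actual checkpoint file):
# __C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/my_latest.ckpt"
```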
Hello, I don't understand the purpose of the center coordinates. What are they used for besides getting the bbox?
> @weilanShi Hi, the center point of public dataset is taken from V2V code (https://github.com/mks0601/V2V-PoseNet_RELEASE), and if you have to test on your own data, you may need to train...
@Jessespace In the XML files of VID, the occluded and generated fields at the bottom differ from the VOC dataset. How are they generated? Do you need to change the source...
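A plain VOC parser usually ignores those extra fields, so one option is to read them explicitly. The sketch below is an assumption about the VID layout (occluded/generated as children of each &lt;object&gt;, after &lt;bndbox&gt;); the sample XML and function name are illustrative, not taken from the repo.

```python
import xml.etree.ElementTree as ET

# Illustrative VID-style annotation (field names per ImageNet VID convention)
VID_XML = """<annotation>
  <object>
    <trackid>0</trackid>
    <name>n02084071</name>
    <bndbox><xmax>500</xmax><xmin>100</xmin><ymax>300</ymax><ymin>50</ymin></bndbox>
    <occluded>1</occluded>
    <generated>0</generated>
  </object>
</annotation>"""

def parse_objects(xml_text):
    """Parse VID-style objects, keeping the occluded/generated flags
    that a plain VOC parser would drop."""
    root = ET.fromstring(xml_text)
    objs = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        objs.append({
            "name": obj.findtext("name"),
            "box": tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax")),
            # default to "0" so the same parser still works on VOC files
            "occluded": int(obj.findtext("occluded", "0")),
            "generated": int(obj.findtext("generated", "0")),
        })
    return objs
```

Because the extra tags default to 0 when absent, the same parser handles both VOC and VID files without changes elsewhere in the loader.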