YOLOv3_PyTorch
Full implementation of YOLOv3 in PyTorch
Hello. First of all, thank you for this awesome repo! I have one question. I want to change the backbone from darknet to something else. After inspecting and testing the...
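For anyone exploring the same idea, here is a minimal sketch of what a backbone swap could look like, assuming the rest of the network only needs the three feature maps (strides 8/16/32) that darknet-53 normally supplies. The `ResNetBackbone` class and the choice of `torchvision.models.resnet50` are illustrative assumptions, not part of this repo.

```python
import torch
import torch.nn as nn
import torchvision

class ResNetBackbone(nn.Module):
    """Hypothetical drop-in backbone returning three feature maps at strides 8/16/32."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50()  # weights left random here; load pretrained ones as needed
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)
        self.layer1 = resnet.layer1  # stride 4
        self.layer2 = resnet.layer2  # stride 8
        self.layer3 = resnet.layer3  # stride 16
        self.layer4 = resnet.layer4  # stride 32

    def forward(self, x):
        x = self.stem(x)
        x = self.layer1(x)
        c3 = self.layer2(x)   # fed to the highest-resolution YOLO head
        c4 = self.layer3(c3)
        c5 = self.layer4(c4)
        return c3, c4, c5

if __name__ == "__main__":
    feats = ResNetBackbone()(torch.randn(1, 3, 416, 416))
    print([f.shape for f in feats])  # expect 52x52, 26x26, 13x13 grids for a 416 input
```

Note that the channel counts (512/1024/2048 for ResNet-50) differ from darknet-53's 256/512/1024, so the first convolution of each detection head would need its input channels adjusted to match.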
https://github.com/espectre/YOLOv3_PyTorch/blob/master/README.md This is the result of my training.
I have been training and predicting based on https://github.com/eriklindernoren/PyTorch-YOLOv3, but I never managed to solve the multi-GPU training problem. I found that simply adding these two lines: model = nn.DataParallel(model, device_ids=args.device) and model.cuda() does not speed up training at all. I see your code supports multi-GPU training; could you help explain how? Thanks!
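As a rough illustration of the usual `DataParallel` pattern (a sketch only; the dummy `nn.Sequential` network below stands in for the actual YOLOv3 model and is not this repo's code):

```python
import torch
import torch.nn as nn

# Dummy network standing in for the YOLOv3 model (assumption for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, 3, padding=1),
)

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs, so the
    # global batch size must be large enough that every GPU gets real work;
    # otherwise the replicate/scatter/gather overhead can cancel the speedup.
    model = nn.DataParallel(model)

if torch.cuda.is_available():
    model = model.cuda()
```

In other words, wrapping the model is not enough on its own: the per-step batch has to be large enough to keep all GPUs busy. For serious multi-GPU training, `torch.nn.parallel.DistributedDataParallel` generally scales better than `DataParallel`.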
@BobLiu20 Hi, nice work. I have one question: you said you did not use the original darknet cfg to load the model, and that you have not used any C implementation...
I've gotten image recognition working at several frames per second using a GTX 1060 with 6 GB of memory. Now I'm trying to train a custom classifier, but I keep running out...
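Assuming what gets cut off here is GPU memory (an assumption, since the text is truncated), a common workaround is to shrink the per-step batch and accumulate gradients over several steps. A minimal, generic sketch, not the repo's training loop:

```python
import torch
import torch.nn as nn

# Tiny stand-in model and random data; the real YOLOv3 loop is far more involved.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

accum_steps = 4   # effective batch = small_batch * accum_steps
small_batch = 4   # per-step batch kept small enough to fit in limited GPU memory

optimizer.zero_grad()
for step in range(16):
    x = torch.randn(small_batch, 10, device=device)
    y = torch.randn(small_batch, 1, device=device)
    # Scale the loss so the accumulated gradient matches one large batch.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```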
@BobLiu20 How are you? I have a question: I need a YOLOv3 that is implemented completely in PyTorch and does not use the darknet framework or its cfg parser. So,...
I was confused about why BCE is used to compute the loss for 'x' and 'y'; couldn't it be MSE? https://github.com/BobLiu20/YOLOv3_PyTorch/blob/c6b483743598b5f64d520d81e7e5f47ba936d4c9/nets/yolo_loss.py#L55
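For context, the x/y offsets at the linked line are sigmoid outputs, so both losses are well defined on them. The snippet below is a standalone sketch with random tensors, not the repo's loss code; it only shows how the two criteria would be computed on the same values:

```python
import torch
import torch.nn as nn

# x/y offsets come out of a sigmoid, so predictions and targets both lie in (0, 1).
pred_xy = torch.sigmoid(torch.randn(8, 2))
target_xy = torch.rand(8, 2)

loss_bce = nn.BCELoss()(pred_xy, target_xy)  # treats each offset like a probability in (0, 1)
loss_mse = nn.MSELoss()(pred_xy, target_xy)  # plain squared error on the same values

print(loss_bce.item(), loss_mse.item())
```

One practical difference: the BCE gradient grows sharply when the prediction sits near 0 or 1 while the target is on the other side, whereas MSE penalizes all errors quadratically. Since both are valid on (0, 1), either choice can train; which converges better is an empirical question.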
When I use the code to train the VOC2007 dataset, I get the result as follows, and the detection result is empty! Can anyone...
I have a question.
In the get_target function in yolo_loss.py, why is it `noobj_mask[b,anch_ious>ignore_threshold]=0` rather than `noobj_mask[b,anch_ious>ignore_threshold,gi,gj]=0`? Shouldn't the overlap between the ground truth and an anchor exceed the threshold only at the grid cell responsible for the prediction? Written the first way, that anchor is ignored in every grid cell when the confidence loss is computed.
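To make the difference concrete, here is a small standalone sketch; the shapes, indices, and IoU values are made up for illustration and follow the index order used in the question, not necessarily the repo's:

```python
import torch

B, A, H, W = 1, 3, 13, 13                  # batch, anchors, grid height, grid width (assumed)
ignore_threshold = 0.5
anch_ious = torch.tensor([0.2, 0.7, 0.9])  # IoU of each anchor shape with one ground-truth box
b, gi, gj = 0, 5, 7                        # hypothetical image index and responsible grid cell

# Indexing as in the repo: the boolean mask selects anchors 1 and 2 and, with no
# spatial index given, zeroes them at EVERY grid cell of image b.
mask_repo = torch.ones(B, A, H, W)
mask_repo[b, anch_ious > ignore_threshold] = 0
print(int(mask_repo.sum()))   # 507 - 2*169 = 169 ones remain

# Indexing as suggested in the question: zero those anchors only at cell (gi, gj).
mask_cell = torch.ones(B, A, H, W)
mask_cell[b, anch_ious > ignore_threshold, gi, gj] = 0
print(int(mask_cell.sum()))   # 507 - 2 = 505 ones remain
```

The sketch only shows what each indexing expression does; whether zeroing the anchor across all cells is intentional depends on how anch_ious is computed inside get_target.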