ChenMaolong
Hi, were your anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 produced by running k-means clustering on the COCO dataset? One more question: the original code does not include SAM. Does your code have it, and if so, in which Python file?
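(For reference, not the repo author's answer: the SAM referred to here is presumably the modified spatial attention module from the YOLOv4 paper, which replaces spatial pooling with a point-wise convolution followed by a sigmoid, multiplied back onto the feature map. A minimal NumPy sketch under that assumption; all function and variable names below are mine, not from the repo.)

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modified_sam(feat, weight, bias):
    """YOLOv4-style (point-wise) SAM: feat * sigmoid(1x1-conv(feat)).

    feat:   (C, H, W) feature map
    weight: (C, C)    1x1 convolution kernel
    bias:   (C,)      per-channel bias
    """
    # a 1x1 conv is just a per-pixel linear mix of channels
    attn = np.einsum('oc,chw->ohw', weight, feat) + bias[:, None, None]
    return feat * sigmoid(attn)

feat = np.random.randn(4, 8, 8)
out = modified_sam(feat, np.eye(4), np.zeros(4))
```

Because the sigmoid gate lies in (0, 1), the output is the input feature map scaled down element-wise, never amplified.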
Hi, according to your Zhihu article https://zhuanlan.zhihu.com/p/109968578, the elbow method suggests K = 4, so why did you use k = 5?
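(For reference: the elbow criterion just looks for the k after which total within-cluster distortion stops dropping sharply, and that bend can be ambiguous, which is one reason two readers may pick different K. A self-contained toy sketch using plain Euclidean k-means on synthetic 2-D points, not the repo's IoU-based version; everything here is mine.)

```python
import numpy as np

def kmeans_distortion(points, k, iters=30):
    """Plain Euclidean k-means; returns summed distance to the nearest centre."""
    # naive deterministic init: k points spread evenly through the array
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1)  # (N, k)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    d = np.linalg.norm(points[:, None] - centers[None], axis=-1)
    return d.min(axis=1).sum()

rng = np.random.default_rng(0)
# three well-separated blobs, so the distortion curve should bend at k = 3
pts = np.vstack([rng.normal(c, 0.1, size=(50, 2))
                 for c in ([0.0, 0.0], [5.0, 5.0], [10.0, 0.0])])
curve = [kmeans_distortion(pts, k) for k in range(1, 7)]
```

On this synthetic data the distortion collapses between k = 2 and k = 3 and is nearly flat afterwards; real box data rarely has such a clean elbow.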
Hi, for a comparison experiment, can I change input_shape in [faster-rcnn-pytorch] to input_shape = [608, 608] and also set input_shape = [608, 608] everywhere else in the code? Here in nets.rpn.py there is:

```python
class ProposalCreator():
    def __init__(
        self,
        mode,
        nms_iou          = 0.7,
        n_train_pre_nms  = 12000,
        n_train_post_nms = 600,
        n_test_pre_nms   = 3000,
        n_test_post_nms  = ...
```
Loss-NaN problem?
Hi, running your git code with your dataset, I hit a loss-NaN problem:

```
Epoch 00002: LearningRateScheduler reducing learning rate to 6e-06.
Epoch 2/100
202/202 [==============================] - 106s 527ms/step - loss: nan - rpn_class_loss_loss: nan - rpn_bbox_loss_loss: nan - mrcnn_class_loss_loss: 1.0970 - mrcnn_bbox_loss_loss: 0.0000e+00...
```
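(Not the author's answer, but since it is the RPN losses that go NaN first here: a very common cause is a degenerate annotation, a zero-width/height or out-of-image box, which produces log(0) or a division by zero in the box encoding. A quick NumPy sanity check over the parsed ground-truth boxes; the function name and layout are mine.)

```python
import numpy as np

def find_bad_boxes(boxes, img_w, img_h):
    """Return indices of boxes that commonly trigger NaN losses.

    boxes: (N, 4) array of [x1, y1, x2, y2] in pixels.
    """
    x1, y1, x2, y2 = boxes.T
    bad = (
        (x2 <= x1) | (y2 <= y1)            # zero/negative width or height
        | (x1 < 0) | (y1 < 0)              # outside the image, low side
        | (x2 > img_w) | (y2 > img_h)      # outside the image, high side
        | ~np.isfinite(boxes).all(axis=1)  # NaN/inf already in the labels
    )
    return np.where(bad)[0]

boxes = np.array([[10, 10, 50, 60],    # fine
                  [30, 30, 30, 80],    # zero width
                  [-5, 10, 40, 40]],   # negative coordinate
                 dtype=float)
bad_idx = find_bad_boxes(boxes, img_w=100, img_h=100)  # → array([1, 2])
```

If the annotations are clean, the next things to try are a lower initial learning rate and checking for all-background batches.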
```python
def kmeans(box, k):
    #-------------------------------------------------------------#
    #   How many boxes there are in total
    #-------------------------------------------------------------#
    row = box.shape[0]
    #-------------------------------------------------------------#
    #   Distance from each box to each cluster centre
    #-------------------------------------------------------------#
    distance = np.empty((row, k))
    #-------------------------------------------------------------#
    #   The final cluster assignment
    #-------------------------------------------------------------#
    last_clu = np.zeros((row,))

    np.random.seed()
    #-------------------------------------------------------------#
    #   Randomly pick k boxes as the initial cluster centres
    #-------------------------------------------------------------#
    ...
```
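(Since the routine quoted above is cut off, here is a self-contained sketch of the same idea: k-means over (w, h) pairs with 1 − IoU as the distance, the standard YOLO anchor-clustering trick. Names and details are mine and not necessarily identical to the repo's version.)

```python
import numpy as np

def iou_wh(box, clusters):
    """IoU between one (w, h) box and each (w, h) cluster, both anchored at the origin."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, seed=0):
    """Cluster (N, 2) width/height pairs into k anchors using 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    last = np.full(len(boxes), -1)
    for _ in range(300):                      # iteration cap as a safety guard
        dist = np.array([1 - iou_wh(b, clusters) for b in boxes])  # (N, k)
        near = dist.argmin(axis=1)
        if (near == last).all():              # assignments stable -> converged
            break
        for j in range(k):
            if np.any(near == j):
                # median update is the usual choice for anchor clustering
                clusters[j] = np.median(boxes[near == j], axis=0)
        last = near
    # return anchors sorted by area, smallest first
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

# two clear size groups -> the anchors converge to (10, 10) and (100, 100)
boxes = np.array([[10, 10]] * 5 + [[100, 100]] * 5, dtype=float)
anchors = kmeans_anchors(boxes, k=2)
```

Run on COCO or VOC width/height data and k = 9, this is the procedure that yields anchor lists like the one quoted above.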
Hi, were the yolo4_voc_weights.pth weights in your pytorch yolov4 3.1 release trained on 608×608 images or on 416×416 images?
How do I find the optimal weights?
Hi, I computed the mAP on the validation set:

```
best_epoch_weights.pth',              #91.77% = nodule AP || score_threhold=0.5 : F1=0.89 ; Recall=87.39% ; Precision=91.23%
ep277-loss0.027-val_loss0.038.pth',   #91.77% = nodule AP || score_threhold=0.5 : F1=0.89 ; Recall=87.39% ; Precision=91.23%
```

The mAP computed automatically during training: 0 TOTALLOSS VALLOSS map...
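(As a sanity check on the log above: the reported F1 is consistent with the reported precision and recall, since F1 is their harmonic mean. The values below are copied from the log.)

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9123, 0.8739), 2))  # → 0.89
```

That the two checkpoints give byte-identical metrics suggests they may simply be the same epoch's weights saved under two names, which would explain why searching for a "better" one finds nothing.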
