SparseInst
Training from scratch on the COCO dataset with sparse_inst_r50vd_dcn_giam_aug.yaml, the accuracy does not reach the reported 37.9 AP; it comes out more than ten points lower.
FP16 was not used, batch_size=32, BASE_LR: 0.00005, STEPS: (210000, 250000), MAX_ITER: 270000, WEIGHT_DECAY: 0.05; all other parameters were left at their defaults. After training for 212559 iterations, the AP is only 12.6. What could be going on? Does FP16 have a large impact on the final accuracy?

```
[02/16 06:24:14] d2.evaluation.fast_eval_api INFO: Evaluate annotation type segm
[02/16 06:24:34] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 19.81 seconds.
[02/16 06:24:34] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
[02/16 06:24:36] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 2.17 seconds.
[02/16 06:24:36] d2.evaluation.coco_evaluation INFO: Evaluation results for segm:
```
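For reference, the settings quoted above correspond to the following keys in a detectron2-style YAML config. This is only a sketch: key names follow detectron2 conventions (`IMS_PER_BATCH` is detectron2's name for the total batch size), and the exact layout should be checked against the actual sparse_inst_r50vd_dcn_giam_aug.yaml.

```yaml
# Sketch of the SOLVER section with the values quoted above
# (detectron2/yacs-style config; verify key names against the
# shipped sparse_inst_r50vd_dcn_giam_aug.yaml before using).
SOLVER:
  IMS_PER_BATCH: 32        # total batch size across all GPUs
  BASE_LR: 0.00005
  STEPS: (210000, 250000)  # iterations at which the LR is decayed
  MAX_ITER: 270000
  WEIGHT_DECAY: 0.05
```

Note that if the batch size differs from the one used to produce the reported result, the learning rate and schedule typically need to be rescaled as well (the linear scaling rule), and a mismatch there alone can account for a large accuracy gap.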
| AP | AP50 | AP75 | APs | APm | APl |
|---|---|---|---|---|---|
| 12.604 | 22.949 | 12.255 | 3.839 | 12.047 | 19.988 |
Hi @116022017144, FP16 is not crucial to the final performance. Could you share the log file with me?
> FP16 was not used, batch_size=32
No FP16 and you still fit batch_size=32? That's impressive! Without FP16 I can only manage bs=4, and with it enabled bs=8. Even on a single 3090, bs=8 is the most I can get without FP16.
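For what it's worth, mixed-precision training in detectron2 is controlled by a single config flag, so trying FP16 to reduce memory pressure is a one-line change. This is a hedged sketch: `SOLVER.AMP.ENABLED` is the standard detectron2 key, assuming SparseInst inherits detectron2's default trainer.

```yaml
# Enable automatic mixed precision (FP16) training in detectron2.
# Assumes SparseInst uses detectron2's default AMP plumbing.
SOLVER:
  AMP:
    ENABLED: True
```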