
Question about the model's training output results

Open zwcity opened this issue 6 months ago • 8 comments

Hi, during training I see the validation results shown below. I'd like to confirm two things: 1. Are the Metrics {f1, precision, recall, ...} and the COCO IoU metric both validation-set metrics? 2. If so, why does the sum of TPs and FNs in Metrics clearly not match the number of targets in my validation set? It is far smaller than the actual count.

Test:  [0/9]  eta: 0:00:36  time: 4.0821  data: 3.7480  max mem: 20162
Test:  [8/9]  eta: 0:00:00  time: 0.7457  data: 0.4304  max mem: 20162
Test: Total time: 0:00:06 (0.7601 s / it)
Metrics: {'f1': 0.7608200455580866, 'precision': 0.755656108597285, 'recall': 0.7660550458715596, 'iou': 0.5345214381814003, 'TPs': 167, 'FPs': 54, 'FNs': 51}
Averaged stats:
Accumulating evaluation results...
COCOeval_opt.accumulate() finished...
DONE (t=0.52s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.539
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.764
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.587
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.408
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.585
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.533
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.520
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.709
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.783
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.739
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.771
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.784
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.968
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.863
best_stat: {'epoch': 189, 'coco_eval_bbox': 0.5390755703179622}
Training time 7:53:39
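[Editor's note: a quick way to check the actual target count is the minimal sketch below, assuming a COCO-format annotation file at a hypothetical path. TPs + FNs (167 + 51 = 218 in the log above) should match the number of ground-truth boxes the validator actually saw.]

```python
from pycocotools.coco import COCO

# Hypothetical annotation path; substitute your validation JSON.
coco = COCO("annotations/instances_val.json")
num_gt = len(coco.getAnnIds())  # total ground-truth boxes in the val set
print("ground-truth boxes:", num_gt)  # compare with TPs + FNs = 218
```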

zwcity avatar Jul 02 '25 02:07 zwcity

The validation code has been upgraded twice since the original version; the most recent change was: https://github.com/Peterande/D-FINE/pull/287 @MiXaiLL76 Could you help me check what the problem is? Thanks

Peterande avatar Jul 17 '25 10:07 Peterande

@zwcity Hello! Let's try to figure it out more specifically.

I assume this line:

Metrics: {'f1': 0.7608200455580866, 'precision': 0.755656108597285, 'recall': 0.7660550458715596, 'iou': 0.5345214381814003, 'TPs': 167, 'FPs': 54, 'FNs': 51}

is printed from here: https://github.com/Peterande/D-FINE/blob/d6694750683b0c7e9f523ba6953d16f112a376ae/src/solver/det_engine.py#L233 and computed here: https://github.com/Peterande/D-FINE/blob/master/src/solver/validator.py#L13

So I don't think my PR affects these metrics in any way; they appear to be computed by a different validator class than the one I implemented.

MiXaiLL76 avatar Jul 17 '25 14:07 MiXaiLL76

@Peterande It seems this issue is related to the output:

Metrics: {'f1': 0.7608200455580866, 'precision': 0.755656108597285, 'recall': 0.7660550458715596, 'iou': 0.5345214381814003, 'TPs': 167, 'FPs': 54, 'FNs': 51}

It's not related to the COCO validation. Or maybe I'm missing something. I'm ready to help however I can; let's wait for a response from @zwcity.

MiXaiLL76 avatar Jul 17 '25 14:07 MiXaiLL76

Got it. I'm gonna check the updates related to this Validator. @MiXaiLL76

Peterande avatar Jul 18 '25 07:07 Peterande

> Got it. I'm gonna check the updates related to this Validator. @MiXaiLL76

Look, now I have this function: https://github.com/MiXaiLL76/faster_coco_eval/blob/main/faster_coco_eval/core/faster_eval_api.py#L206

Maybe it will help you, or replace your additional validator?

(This is a modified duplicate of the function from https://github.com/roboflow/rf-detr/blob/1e63dbad402eea10f110e86013361d6b02ee0c09/rfdetr/engine.py#L170)
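For reference, faster_coco_eval is documented as a drop-in replacement for pycocotools, so a minimal usage sketch for standard bbox evaluation could look like the following (file paths are hypothetical):

```python
from faster_coco_eval import COCO, COCOeval_faster

coco_gt = COCO("annotations/instances_val.json")  # hypothetical path
coco_dt = coco_gt.loadRes("predictions.json")     # hypothetical path

ev = COCOeval_faster(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()
print(ev.stats[0])  # AP @[IoU=0.50:0.95]
```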

MiXaiLL76 avatar Jul 18 '25 08:07 MiXaiLL76

> Metrics: {'f1': 0.7608200455580866, 'precision': 0.755656108597285, 'recall': 0.7660550458715596, 'iou': 0.5345214381814003, 'TPs': 167, 'FPs': 54, 'FNs': 51}

@zwcity If you multiply these reported counts by the number of GPUs, do you get the size of your dataset? If so, the cause is likely that these metrics are not collected across GPUs, but the subsequent COCO eval is still correct.
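[Editor's note: for illustration, a minimal sketch of collecting these counters across GPUs with torch.distributed; gather_counts is a hypothetical helper, not code from the repo.]

```python
import torch
import torch.distributed as dist

def gather_counts(tp: int, fp: int, fn: int):
    """Sum per-rank TP/FP/FN counters over all processes (hypothetical helper)."""
    counts = torch.tensor([tp, fp, fn], dtype=torch.long)
    if dist.is_available() and dist.is_initialized():
        # NCCL all_reduce requires CUDA tensors, so move to the current device.
        counts = counts.to(torch.cuda.current_device())
        dist.all_reduce(counts, op=dist.ReduceOp.SUM)
    tp, fp, fn = counts.tolist()
    return tp, fp, fn
```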

Peterande avatar Jul 18 '25 08:07 Peterande

> Got it. I'm gonna check the updates related to this Validator. @MiXaiLL76
>
> Look, now I have this function: https://github.com/MiXaiLL76/faster_coco_eval/blob/main/faster_coco_eval/core/faster_eval_api.py#L206
>
> Maybe it will help you, or replace your additional validator?
>
> (This is a modified duplicate of the function from https://github.com/roboflow/rf-detr/blob/1e63dbad402eea10f110e86013361d6b02ee0c09/rfdetr/engine.py#L170)

That's great. It seems the current pre-Validator only takes the results of a single GPU into account.

Peterande avatar Jul 18 '25 08:07 Peterande

def compute_metrics(self, extended=False) -> Dict[str, float]:
    # Filter predictions by the confidence threshold before computing metrics.
    filtered_preds = filter_preds(copy.deepcopy(self.preds), self.conf_thresh)
    metrics = self._compute_main_metrics(filtered_preds)
    if not extended:
        # Drop the extended metrics unless explicitly requested.
        metrics.pop("extended_metrics", None)
    return metrics

@Peterande Hello, and first of all congratulations on your achievements. I have a question I hope you can answer. When computing metrics, I found that in my C++ implementation, after post-processing the model's outputs, I have to sort them by score and keep the top n (n can be 20, 50, 100), or filter them with a score threshold (e.g. 0.1 or 0.2), before the computed mAP is correct. If I use all Q=300 outputs directly, the computed mAP is wrong: the value comes out too low. Why is that?
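[Editor's note: a minimal sketch of the filtering described above, using a hypothetical NumPy helper that keeps the top-k detections above a score threshold.]

```python
import numpy as np

def filter_detections(boxes, scores, labels, top_k=100, score_thresh=0.0):
    """Keep at most top_k detections whose score is >= score_thresh.

    Hypothetical helper; boxes is an (N, 4) array, scores and labels
    are (N,) arrays for one image.
    """
    keep = scores >= score_thresh
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    # Sort by descending score and truncate to the top_k detections.
    order = np.argsort(-scores)[:top_k]
    return boxes[order], scores[order], labels[order]
```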

LeBron-Jian avatar Aug 02 '25 04:08 LeBron-Jian