mAP calculation for custom test dataset is 0; is that because "has_gt" is set to False in the dataset declaration?
Hello @dbolya,
Please help, I am new to this.
In eval.py you have included the mAP calculation; is it only valid for the validation dataset, or can the function also calculate the mAP metric for test datasets?
I saw that in issue #76 you confirmed the mAP calculation will use whatever dataset is set in the config, so it should work for custom datasets too.
After training I wanted to evaluate my model, so I created a custom test dataset and printed the mAP calculation result, but the mAP results were all 0.
- To start, can you please explain what the first row consists of? Are those different IoU thresholds?
- As far as I have read, the principle of mAP calculation is to compare the ground-truth bounding box/mask to the detected box/mask and return a score. Is that the case? Can you confirm this? (I sketch my understanding after this list.)
- If that is the case: for a custom test dataset you indicated that we should set "has_gt" to False in the dataset declaration (config.py). In that case, won't the mAP calculation have no ground-truth labels to compare the predictions against? (My guess at the declaration also follows below.)
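To make the second point concrete, here is a minimal plain-NumPy sketch of how I currently picture the matching. This is only an illustration of the principle, not YOLACT's actual eval.py code, and the helper names are mine:

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(preds, gts, iou_thresh):
    """preds: list of (score, box); gts: list of ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p[0])   # highest score first
    matched = [False] * len(gts)
    tp, fp = [], []
    for score, box in preds:
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            o = iou(box, gt)
            if o > best and not matched[j]:
                best, best_j = o, j
        if best >= iou_thresh:                   # true positive: matches an unused GT box
            matched[best_j] = True
            tp.append(1); fp.append(0)
        else:                                    # false positive: no GT box left to match
            tp.append(0); fp.append(1)
    if not preds:
        return 0.0
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gts), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    return float(np.trapz(precision, recall))    # rough area under the PR curve

# The columns in the output (.50, .55, ..., .95) look like the IoU thresholds
# that the AP is averaged over:
thresholds = np.arange(0.50, 1.00, 0.05)
preds = [(0.9, [10, 10, 50, 50]), (0.6, [12, 8, 48, 52])]
gts = [[11, 9, 49, 51]]
print(np.mean([average_precision(preds, gts, t) for t in thresholds]))
```

If this picture is right, then with no ground truth every detection is a false positive and every AP comes out as 0, which matches what I am seeing.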
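And for the third point, my guess is that the test set has to be declared with its annotation file and with "has_gt" left as True for mAP to work, something like the following (the dataset name, paths, and class names here are just placeholders from my setup):

```python
# In data/config.py -- a guess at a test-set declaration, based on dataset_base.
my_test_dataset = dataset_base.copy({
    'name': 'My Test Dataset',
    # eval.py appears to read the "valid" split, so I point it at my test set.
    'valid_images': './data/my_dataset/test_images/',
    'valid_info':   './data/my_dataset/annotations/test.json',
    # Keeping has_gt True so the annotations above are loaded for mAP;
    # setting it to False would mean there is no ground truth to compare against.
    'has_gt': True,
    'class_names': ('class_a', 'class_b'),
})
```

Is that the right way to set it up, or is there another way to get mAP on a set declared with has_gt set to False?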
Thanks,
Hello @takwaaa, I am studying this part and want to know how to calculate the mAP on a testing dataset. I also don't know how to handle the problem with ap_data.pkl, because I cannot find anything about it. Could you please give me some advice? Thank you very much.