High False Positives with Background Class in Confusion Matrix
After training my model on a single class, I noticed that it performed well on both the validation and test datasets when I inspected the predictions. However, on closer inspection of the confusion matrix generated during training, I observed a significant number of false positives assigned to the background class.
Additional Context: In the dataset, I provided an empty .txt label file for every image with no objects present.
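For illustration, here is a minimal sketch of how such empty label files can be created, assuming the usual images/ and labels/ directory layout (the paths below are hypothetical):

```python
from pathlib import Path

# Hypothetical layout: dataset/images/train/*.jpg and dataset/labels/train/*.txt
images_dir = Path("dataset/images/train")
labels_dir = Path("dataset/labels/train")
labels_dir.mkdir(parents=True, exist_ok=True)

for img in images_dir.glob("*.jpg"):
    label = labels_dir / f"{img.stem}.txt"
    if not label.exists():
        label.touch()  # empty .txt file => image has no objects (pure background)
```

With this convention, images whose .txt file is empty are treated as background-only samples during training.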
Request: I would appreciate any insights or suggestions on why this issue might be occurring and how to address it effectively. Thank you for your assistance!
@WongKinYiu, @Youho99, I would greatly value your insights on this matter. Thank you kindly.
Can you show your mAP@0.5 and F1-score curves?
This could be due to many things... Also, how many epochs did you train for? And is your training for detection or for classification?
I have posted the graphs; could you please have another look at the post at the top?
I used the train.py file at the root level of the repo (yolov9/train.py), not one from any other folder.
It's strange... What is the distribution of your dataset, in % and in raw number of images (for train/val, and for your test set if you have one)?
train: 80% (4490 samples)
val: 10% (560 samples)
test: 10% (560 samples)
And what is the distribution between images containing objects and background images?
Also, is it possible to see a val_batch_label and its corresponding val_batch_predict?
Images with object: 2837
Images without object: 2776
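For reference, these counts can be double-checked directly from the label files, since background images have an empty .txt; a quick sketch (the labels directory path is an assumption):

```python
from pathlib import Path

labels_dir = Path("dataset/labels")  # hypothetical path to the YOLO label files

with_obj = without_obj = 0
for label in labels_dir.rglob("*.txt"):
    if label.read_text().strip():
        with_obj += 1       # at least one annotation line
    else:
        without_obj += 1    # empty file => background image

print(f"Images with object: {with_obj}, images without object: {without_obj}")
```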
val_batch_label and its corresponding val_batch_predict:
Label:
Prediction:
What we can say is that the confusion matrix does not agree with the val_batches.
Maybe there is a bug in how the confusion matrix is calculated, I don't know. In any case, I have personally never experienced this.
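For context, here is a simplified sketch of how a yolov5/yolov9-style confusion matrix typically handles the extra background row/column; the thresholds and greedy matching below are illustrative and may differ from the repo's utils/metrics.py:

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between two sets of xyxy boxes: (M, 4) x (N, 4) -> (M, N)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def confusion_matrix_with_background(preds, gts, nc=1, conf_thres=0.25, iou_thres=0.45):
    """Build an (nc+1, nc+1) matrix where the last row/column is 'background'.

    preds: (N, 6) array of [x1, y1, x2, y2, conf, cls] after NMS
    gts:   (M, 5) array of [cls, x1, y1, x2, y2]
    """
    matrix = np.zeros((nc + 1, nc + 1), dtype=int)
    preds = preds[preds[:, 4] > conf_thres]            # fixed confidence cut-off
    iou = box_iou(gts[:, 1:], preds[:, :4])            # gt x pred IoU table

    matched_gt, matched_pred = set(), set()
    for g, p in zip(*np.where(iou > iou_thres)):
        if g in matched_gt or p in matched_pred:
            continue                                   # greedy one-to-one matching
        matched_gt.add(g)
        matched_pred.add(p)
        matrix[int(preds[p, 5]), int(gts[g, 0])] += 1  # matched pair -> class vs. class cell

    for g in range(len(gts)):
        if g not in matched_gt:
            matrix[nc, int(gts[g, 0])] += 1            # missed object -> background false negative

    for p in range(len(preds)):
        if p not in matched_pred:
            matrix[int(preds[p, 5]), nc] += 1          # unmatched box -> background false positive
    return matrix
```

Under this scheme, any detection above the fixed confidence threshold that fails to match a ground-truth box at the IoU threshold lands in the background column, so extra or poorly localized boxes can inflate background false positives even when the val_batch images look fine.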
Yup, thanks for your time. I really appreciate it.