Bboxes with too much overlap
Hello,
I've recently started to use RF-DETR and I've observed that, in my case, there are many overlapping bboxes; sometimes the overlap is perfect and two labels are assigned to the same object. Any suggestions about this issue?
I have experienced the same issue
+1
@isaacrob-roboflow Any suggestions about this, or possible solutions?
hi! interesting issue :) can you share some examples?
if you know ahead of time that you have a prior on maximum overlap, you COULD always apply NMS on top of RF-DETR outputs. that forces your prior on max overlap to be observed, which should improve mAP for cases where you have such a prior (at the cost of course of slower inference).
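For readers unfamiliar with what "applying NMS on top" does: a minimal from-scratch sketch of greedy non-maximum suppression (illustrative only; in practice you'd use a library implementation such as the one shown later in this thread). Boxes are assumed to be in `[x1, y1, x2, y2]` format; the function names here are hypothetical, not part of RF-DETR.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop any remaining box
    whose IoU with a kept box exceeds the threshold, repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_threshold]
    return keep
```

The "prior on max overlap" is exactly the `iou_threshold`: any pair of surviving boxes is guaranteed to overlap less than that.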
however, the model SHOULD learn not to do that, given sufficient data. I would love to see more about your usecase!
@isaacrob-roboflow, as I mentioned in Chicago, I’ve experienced similar issues myself.
@panagiotamoraiti, @ginobili, @ews-grmunjal — applying NMS should resolve your issue. You can do this by calling .with_nms(threshold=0.5). For example:
from PIL import Image
from rfdetr import RFDETRBase

model = RFDETRBase(...)
image = Image.open(...)

# predict() returns a supervision Detections object; with_nms() then drops
# any box that overlaps a higher-confidence box above the IoU threshold
detections = model.predict(image, threshold=0.5).with_nms(threshold=0.5)
Thanks a lot for your help. In my case, training for more epochs has significantly reduced the highly overlapping predictions. I have a dataset of ~1000 images with ~5-25 instances in every image. When I trained for 10 epochs I got mAP metrics over 90%, but I had many highly overlapping predictions. Now that I've trained for 20 epochs, most of the highly overlapping bboxes have disappeared. However, some still remain, so applying NMS on top may help. Also, maybe I should train for longer, since I think my dataset is big enough.
@panagiotamoraiti I would imagine 10 epochs is undertraining. I would encourage you to train for longer. We don't use any schedulers, so honestly you can just set it to go for 100 and take the best checkpoint; that'll give the same result as if you'd set it to the optimal number originally.
@SkalskiP how long are you training yours? I totally buy that this behavior might happen early in training but then go away with additional training.
@isaacrob-roboflow I agree, that sounds plausible. I'm starting work on the Basketball AI project this week and will be training quite a few models. I’ll try to keep an eye on this as I go.
@ews-grmunjal how large is your dataset? how many objects appear in a single image?
Hello, I've observed that the issue persists in another dataset. I'm now fine-tuning the pre-trained RF-DETR on an animal dataset I downloaded from Kaggle.
These are my classes: ['Brown bear', 'Red panda', 'Eagle', 'Deer', 'Owl', 'Butterfly', 'Monkey', 'Duck', 'Sparrow', 'Tiger', 'Woodpecker', 'Tortoise', 'Fox', 'Squirrel', 'Rabbit', 'Canary', 'Raccoon', 'Parrot']
The training stopped at epoch 10 due to early stopping, achieving mAP@50 of 0.927 and mAP@50-95 of 0.833. I still get several examples where more than one class ID has been predicted for the same animal. If you are interested in investigating this issue, I would be happy to help and provide further information.
Raising the confidence threshold, which is currently 0.25, reduces such examples, but it doesn't eliminate them completely. I am curious why the network exhibits this behavior.
@isaacrob-roboflow
I have the same problem on the default model trained on coco