
Hyperparameters for Class Imbalance

Open RhysDeLoach opened this issue 3 months ago • 6 comments

Hey y’all,

I am using RF-DETR on a dataset that has two classes with a decent class imbalance. The imbalance isn't hurting mAP50, but I am seeing a 10% gap between the classes in mAP50-95. I know that some sort of per-class augmentation could help, but I am hoping to fix this with my training parameters instead. I've already played with gamma values, alpha values, varifocal and IA-BCE loss, the class loss coefficient, the class cost, and tried implementing per-class weights for the loss. Does anyone know any other parameters I could try?

Best, Rhys

RhysDeLoach avatar Jul 31 '25 14:07 RhysDeLoach

I can't think of anything that would be especially useful for RF-DETR specifically versus dealing with class imbalance in other models. I might suggest building a custom dataloader to oversample the underrepresented class. It may also be that your model needs to train longer: models often learn the easier stuff first, and the overrepresented class may be acting as the easier task to learn, so letting it train longer could address that. Of course, I have no idea what the rest of your training setup looks like, or whether it's already overfitting, etc. :)

any more info you provide would be helpful!
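For readers who land here, the oversampling idea can be sketched in plain Python. This is a toy illustration, not RF-DETR code: in a real setup you would put equivalent logic behind something like PyTorch's `WeightedRandomSampler` inside a custom dataloader, and real detection samples carry multiple boxes rather than one label each.

```python
import random
from collections import Counter

# Hypothetical per-sample class labels; class "A" dominates "B"
# (roughly the 30k-vs-5k ratio from this thread, scaled down).
labels = ["A"] * 300 + ["B"] * 50

# Weight each sample by the inverse frequency of its class, so a
# weighted draw yields a roughly balanced stream of training samples.
counts = Counter(labels)
weights = [1.0 / counts[lab] for lab in labels]

random.seed(0)
resampled = random.choices(range(len(labels)), weights=weights, k=10_000)
drawn = Counter(labels[i] for i in resampled)

# Despite the 6:1 raw imbalance, the resampled stream is near 50/50.
ratio = drawn["B"] / drawn["A"]
```

Sampling with replacement means the rare class's images repeat, so pairing this with augmentation (as above) helps avoid memorizing those repeats.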

isaacrob-roboflow avatar Jul 31 '25 21:07 isaacrob-roboflow

Hey Isaac,

I understand. I'll have to give that custom data loader approach a try. I have roughly 30k instances of class A and 5k instances of class B. I used the following augmentations...

- Flip: Horizontal
- 90° Rotate: Clockwise, Counter-Clockwise
- Crop: 0% Minimum Zoom, 20% Maximum Zoom
- Noise: Up to 1.52% of pixels

As for my training parameters, I trained 25 models with different combinations of the following...

- Loss Function: IA BCE or Varifocal
- Alpha: 0.25 - 0.85
- Gamma: 2 - 4
- Class Cost: 2 - 4
- Class Loss Coeff: 1 - 3
- Batch Size: 16
- Grad Accum Steps: 1
- Learning Rate: 1e-4
- Early Stopping: Always On
- Custom Loss Weights Per Class
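One way to pick the "custom loss weights per class" mentioned above is inverse-frequency weighting from the instance counts in this thread (~30k of class A, ~5k of class B). The formula below is a common heuristic, not something RF-DETR exposes directly; the names here are purely illustrative.

```python
# Instance counts from the thread: class A heavily outnumbers class B.
counts = {"A": 30_000, "B": 5_000}
total = sum(counts.values())
n_classes = len(counts)

# Inverse-frequency weights, normalized so that each class contributes
# equally to the total loss (weight * count is the same for both classes).
weights = {c: total / (n_classes * n) for c, n in counts.items()}
# A gets ~0.58, B gets 3.5 -> B's errors are penalized ~6x harder.
```

These weights would then multiply the per-class classification loss terms; the same numbers can also seed the sampling weights for an oversampling dataloader.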

Model: Base until most recent release. Now medium.

All other parameters were default. As for the results, the longest any model trained was 26 epochs, and test loss seemed to plateau around epoch 10. My mAP50 for both classes was about 99%, but my mAP50-95 was 89-90 for class A versus 80-81 for class B. I really appreciate the help.

Also, on a side note, while trying to adjust some of these training parameters, I found a couple of what I believe to be bugs and was able to find workarounds (maybe fixes?) for them. I noticed that the GitHub was fairly inactive, I assume because y'all were working on finalizing this most recent release, so I reported them to Mr. Ford in the Roboflow forums. Was that information passed along to y'all?

Best, Rhys

RhysDeLoach avatar Aug 01 '25 13:08 RhysDeLoach

Please open up issues for bugs encountered in the open source version :)

isaacrob-roboflow avatar Aug 01 '25 15:08 isaacrob-roboflow

Hey Isaac,

Will do! I already posted one about the run_test implementation a little while ago, though I don't know if that one is so much a bug as a feature request. The line is kind of blurry there.

Best, Rhys

RhysDeLoach avatar Aug 01 '25 16:08 RhysDeLoach

Hi, how did you implement a custom weight loss for each class?

KhairulM avatar Aug 22 '25 04:08 KhairulM

@RhysDeLoach Hi, can you share the hyperparameters that worked for you in solving class imbalances?

KhairulM avatar Oct 16 '25 07:10 KhairulM