
Add LightlyTrain Integration for Pretraining Support

Open yutong-xiang-97 opened this issue 7 months ago • 4 comments

Description

Add LightlyTrain Integration for Pretraining Support

LightlyTrain is a framework built on PyTorch that lets you pretrain any computer vision model on your own unlabeled data by distilling knowledge from powerful vision foundation models and by using self-supervised learning. With only a few lines of code, the community can pretrain domain-specific backbones for RF-DETR and other downstream tasks. We think pretraining on custom domains is a great addition to RF-DETR, which is why we would love to feature our integration in your README.

You can start pretraining RF-DETR like this:

import lightly_train

if __name__ == "__main__":
    lightly_train.train(
        out="out/my_experiment",                # Output directory.
        data="my_data_dir",                     # Directory with images.
        model="rfdetr/rf-detr-base",            # Pass the RF-DETR model.
    )

and then fine-tune from the exported checkpoint:

# fine_tune.py
from rfdetr import RFDETRBase

if __name__ == "__main__":
    # Load the checkpoint exported by LightlyTrain as pretrained weights.
    model = RFDETRBase(pretrain_weights="out/my_experiment/exported_models/exported_last.pt")

    model.train(dataset_dir=<DATASET_PATH>)

You can also check our docs and product page for more details.

Changes

This PR contains

  • a short intro to LightlyTrain added to the “Training” section in the README file

Type of change

  • [x] New feature (non-breaking change which adds functionality)

How has this change been tested? Please provide a testcase or example of how you tested the change.

N/A

Any specific deployment considerations

N/A

Docs

  • [ ] Docs updated? What were the changes:

yutong-xiang-97 · Apr 15 '25 13:04

CLA assistant check
All committers have signed the CLA.

CLAassistant · Apr 15 '25 13:04

We use a DINOv2-pretrained backbone. The comparisons you have in your library are against ImageNet-pretrained classifiers, which are much less relevant to the target task. Do you have evidence that your approach helps for this model?

isaacrob-roboflow · Apr 16 '25 20:04

Hi @isaacrob-roboflow! We now also support distilling DINOv3 into RF-DETR backbones.
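
For reference, a minimal sketch of what that could look like, reusing the lightly_train.train call from above. The method and method_args parameters and the "dinov3/vitb16" teacher identifier are illustrative assumptions, not confirmed API; check the LightlyTrain docs for the exact names.

import lightly_train

if __name__ == "__main__":
    lightly_train.train(
        out="out/my_distillation_experiment",   # Output directory.
        data="my_data_dir",                     # Directory with unlabeled images.
        model="rfdetr/rf-detr-base",            # RF-DETR student backbone.
        method="distillation",                  # Assumed method name.
        method_args={
            # Assumed teacher identifier; the supported DINOv3 model names
            # may differ in the actual LightlyTrain release.
            "teacher": "dinov3/vitb16",
        },
    )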

liopeer · Oct 08 '25 09:10

OK. Still not useful, as our backbone is DINOv2 plus Objects365 (O365) pretraining. Distilling DINOv3 into that removes the pretraining.

isaacrob-roboflow · Oct 08 '25 13:10