Run vanilla DINOv2 training with an unlabelled dataset to fit specific field data
The current approach to fine-tuning the model on a downstream task is to freeze the DINOv2 backbone and fine-tune only the head, but I want to use a large unlabelled dataset to continue training the DINOv2 backbone itself so it fits specific field data (e.g. medical images).
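For reference, the freeze-backbone approach described above can be sketched as follows. This is a minimal, hypothetical example: the toy `nn.Sequential` stands in for the real DINOv2 ViT (which would normally be loaded via `torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')`, embedding dim 384), and the 10-class head is an assumed downstream task.

```python
import torch
import torch.nn as nn

# Toy stand-in for the DINOv2 ViT-S/14 backbone (embedding dim 384).
# In practice you would load the real backbone from torch.hub instead.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 384))

# Freeze the backbone: only the head will receive gradient updates.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(384, 10)  # hypothetical 10-class downstream task
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

# One dummy training step on random data.
x = torch.randn(4, 3, 16, 16)
y = torch.randint(0, 10, (4,))
feats = backbone(x)                                   # frozen features
loss = nn.functional.cross_entropy(head(feats), y)
loss.backward()
opt.step()

# The frozen backbone accumulates no gradients; the head does.
print(all(p.grad is None for p in backbone.parameters()))  # True
print(all(p.grad is not None for p in head.parameters()))  # True
```

Continuing self-supervised pretraining, by contrast, would require updating the backbone with the DINOv2 training objective, which is where the missing training heads become the obstacle.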
- I noticed that continued training here requires Slurm, but I want plain single-node (vanilla) training like DINOv1.
- Can I really continue training the pre-trained DINOv2 model with an unlabelled dataset? The released pre-trained models are all distilled, and in issue #204 (originally posted by @qasfb in https://github.com/facebookresearch/dinov2/issues/204#issuecomment-1707792318) they replied that this continued training is not currently possible because the training head is not provided at the moment.
I hope someone can help me with this.
Best Regards.
Do you mean that it's not possible because they did not provide the head weights?
Hey @jaime-1998, just wanted to check in. Do you have any updates? Curious whether you tried training with the unlabelled data or found a workaround. Thanks!