
Added wandb, metrics, mask2former

Open Alighieri1231 opened this pull request 6 months ago • 1 comment

So here we go: this is my first pull request, so sorry if I should be more detailed. I really like the project!

  • Added Dice score as a metric
  • Added IoU and Dice score for the foreground and for each class
  • Added the IoU calculation from the nnU-Net library

The reason I added so many Dice and IoU variants is that they all give different values! I still need to analyze which one is the correct one, but in the meantime I wanted to train all the cases (you can see the mean and std of the curves because I ran cross-validation).

[Screenshot: mean and std of the metric curves across cross-validation folds]
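For reference, these variants can disagree because of smoothing constants and because of how classes are averaged (with or without the background class, micro vs. macro). Here is a minimal sketch of a per-class computation from hard label maps; the function name and the epsilon smoothing are illustrative, not necessarily what this PR or nnU-Net actually uses:

```python
import torch

def dice_iou_per_class(pred, target, num_classes, eps=1e-6):
    # pred, target: integer label maps of shape (H, W) or (N, H, W).
    # Averaging choices (with/without background, micro vs. macro) are
    # one reason the different Dice/IoU implementations disagree.
    dice, iou = [], []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        inter = (p & t).sum().float()
        total = p.sum().float() + t.sum().float()        # |P| + |T|
        dice.append(((2 * inter + eps) / (total + eps)).item())
        iou.append(((inter + eps) / (total - inter + eps)).item())
    return dice, iou
```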

  • Added wandb logging for improved visualization (a minimal sketch follows below)
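A minimal sketch of the logging calls, assuming standard wandb usage; the project name, config keys, and metric names are illustrative, and train_one_epoch/validate stand in for this repo's actual training loop:

```python
import wandb

wandb.init(project="dinov2-finetune",                 # hypothetical project name
           config={"decoder": "mask2former", "lr": 1e-4})

num_epochs = 50                                       # illustrative
for epoch in range(num_epochs):
    train_loss = train_one_epoch(model, train_loader)  # assumed helper
    dice, iou = validate(model, val_loader)            # assumed helper
    wandb.log({"train/loss": train_loss,
               "val/dice_mean": sum(dice) / len(dice),
               "val/iou_mean": sum(iou) / len(iou),
               "epoch": epoch})
wandb.finish()
```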

  • Added Mask2Former as a new decoder:

  • For this I needed to add the dinov3 library. In order to run the code, you need to compile 'MultiScaleDeformableAttention' once; to do this, run:

```
%cd /dinov3-finetune/dinov3/eval/segmentation/models/utils/ops/
!python setup.py build install
```

Once you do that, you will be able to run Mask2Former. An example of running this new code is in run_gpu1_mask2former.sh.

I achieved results similar to using LoRA with FPN and the head. I did not have time to train Mask2Former with LoRA; my dataset was not very big, so saturation may happen.

[Screenshot: validation curves for the Mask2Former decoder]
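For context on how a decoder plugs into the backbone, here is a minimal sketch of feeding frozen DINOv2 patch tokens to a toy 1x1-conv head; the head stands in for the FPN or Mask2Former decoders and is illustrative only:

```python
import torch
import torch.nn.functional as F

# Frozen DINOv2 ViT-S/14 backbone from torch.hub.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

num_classes = 4                                   # illustrative
head = torch.nn.Conv2d(384, num_classes, 1)       # ViT-S/14 embed dim = 384

x = torch.randn(1, 3, 224, 224)                   # H, W must be multiples of 14
with torch.no_grad():
    # (1, 384, 16, 16) patch-token feature map
    feats = backbone.get_intermediate_layers(x, n=1, reshape=True)[0]

logits = head(feats)                              # (1, num_classes, 16, 16)
logits = F.interpolate(logits, size=x.shape[-2:],
                       mode="bilinear", align_corners=False)
```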

  • Added 32-bit precision for an increased batch size.
  • Added some .sh files to run several experiments at a time:

I am including two SLURM files (the .sh files) in case you want to run several experiments on a cluster at the same time.

Let me know if I should organize things better; I would love to see this project grow! Thanks. For the next iteration I am implementing classic U-Nets as benchmarks, so we can have everything in one repository.

Alighieri1231 · Oct 19 '25 22:10

Hi again, thank you for adding all these components; I am glad the project was interesting for you! Maybe we, or I, could help you here and clean up the code and some of the SLURM bash scripts, as different users might have different ways of running their experiments.

RobvanGastel · Oct 22 '25 08:10