How to finetune the pretrained model?
I trained the model on the MS1MV2 dataset, but performance is poor on my own dataset, which contains grayscale images of several thousand people.
I am trying to improve performance on my dataset by finetuning the pretrained model. How should I do that?
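For context, the rough pattern I assume finetuning would follow is something like the sketch below: load the released backbone weights, attach a new head sized for my own identity list, and train with a much smaller learning rate. The `net.build_model` call, the checkpoint filename, and the `model.` key prefix are guesses based on the repo layout, not something I have confirmed.

```python
import torch
import net  # assumption: the repo's net.py exposes build_model() for the backbones

# Load the released IR-101 backbone weights. The checkpoint filename and the
# 'model.' key prefix are assumptions; inspect your checkpoint and adjust.
backbone = net.build_model('ir_101')
ckpt = torch.load('adaface_ir101_ms1mv2.ckpt', map_location='cpu')
state = {k.replace('model.', '', 1): v
         for k, v in ckpt['state_dict'].items() if k.startswith('model.')}
backbone.load_state_dict(state)

# Fresh classification head sized for the identities in the custom dataset.
# A plain linear layer stands in here for the repo's AdaFace head.
num_classes = 3000  # hypothetical: number of people in the custom data
head = torch.nn.Linear(512, num_classes)

# Finetune with a much smaller learning rate on the backbone than on the new head.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-3},
        {"params": head.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)
```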
@PR451 Did you get the same results as the paper? Training with MS1MV2, we get lower performance on IJB-C (96.73 vs. 96.89 in the paper) and TinyFace (rank-1 67.59 vs. 68.21 in the paper). We can reproduce the paper's results using the released model.
Below is our script; we train on 2 V100 GPUs.
python main.py \
    --data_root /home/data \
    --train_data_path faces_emore \
    --val_data_path faces_emore \
    --prefix ir101_ms1mv2_adaface \
    --use_mxrecord \
    --gpus 2 \
    --use_16bit \
    --arch ir_101 \
    --batch_size 512 \
    --num_workers 16 \
    --epochs 26 \
    --lr_milestones 12,20,24 \
    --lr 0.1 \
    --head adaface \
    --m 0.4 \
    --h 0.333 \
    --low_res_augmentation_prob 0.2 \
    --crop_augmentation_prob 0.2 \
    --photometric_augmentation_prob 0.2
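For what it's worth, here is a rough sketch of the general pattern behind probability-gated augmentations like the three `--*_augmentation_prob` flags, where each augmentation fires independently per sample. This is not the repo's actual implementation, just an illustration of the idea.

```python
import random
from torchvision import transforms
from torchvision.transforms import functional as F

def maybe_augment(img, p_low_res=0.2, p_crop=0.2, p_photo=0.2):
    """Apply each augmentation independently with its own probability (PIL input)."""
    if random.random() < p_low_res:
        w, h = img.size  # PIL images report (width, height)
        img = F.resize(F.resize(img, [h // 4, w // 4]), [h, w])  # down- then up-sample
    if random.random() < p_crop:
        w, h = img.size
        img = transforms.RandomResizedCrop((h, w), scale=(0.5, 1.0))(img)
    if random.random() < p_photo:
        img = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4)(img)
    return img
```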
I did not check the performance on IJB-C/TinyFace. I got similar average accuracy on the 5 validation datasets, though. I am using the same script as you.
Hi @PR451 and @ZonePG, I hope you are doing well. I wonder if you could help us, or share how you avoided the custom-dataset problems described in #75 and #64. Could you tell us how you got training to run without the dimension error in the outputs (our current guess at the cause is sketched below), or whether you managed to use a custom validation set?
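For reference, our working assumption is that the dimension error comes from the classifier head in the checkpoint still being sized for the original dataset's identity count rather than the custom dataset's. A minimal sketch of what we are trying, loading only the backbone weights and leaving the head to be re-created for our label count; the `head.` key prefix and the checkpoint layout are guesses on our part.

```python
import torch

def load_backbone_only(model, ckpt_path, head_prefix='head.'):
    """Load a checkpoint but skip the classification-head weights, whose output
    dimension matches the original training set's identity count, not ours."""
    ckpt = torch.load(ckpt_path, map_location='cpu')
    state = ckpt.get('state_dict', ckpt)
    filtered = {k: v for k, v in state.items() if not k.startswith(head_prefix)}
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print('missing keys:', missing)
    print('unexpected keys:', unexpected)
    return model
```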
Regards