cavaface
About the pre-trained model
Thank you for this work. I want to use the pre-trained model of AttentionNet-IRSE-56/92 from the MODEL_ZOO.md for fine-tuning. Where can I get the pre-trained model?
The AttentionNet-IRSE pretrained model is not available. I saved the model using the torch.jit.save API, which does not seem to support fine-tuning. You can retrain it and save the model as done here: https://github.com/cavalleria/cavaface.pytorch/blob/13182ecc349ca050fa5a877045390a41037313a7/train.py#L352
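To illustrate the difference for fine-tuning, here is a minimal sketch with a toy backbone and hypothetical file names (the linked train.py saves the backbone's state_dict in a similar spirit):

```python
import torch
import torch.nn as nn

# Toy stand-in for a backbone (hypothetical shapes, not the real network).
backbone = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# torch.jit.save stores a compiled ScriptModule -- handy for deployment and
# evaluation, but it is no longer an instance of the original Python class.
torch.jit.save(torch.jit.script(backbone), "backbone_script.pt")

# torch.save on the state_dict keeps plain tensors, which any fresh instance
# of the same architecture can load and then fine-tune.
torch.save(backbone.state_dict(), "backbone_ckpt.pt")

fresh = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
fresh.load_state_dict(torch.load("backbone_ckpt.pt"))
```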
OK, thanks! Can the models from MODEL_ZOO.md not be fine-tuned? Such as IR-SE50, etc.
The models in MODEL_ZOO.md were saved using the torch.jit.save API, which lets you evaluate them without the model's Python definition files, but it seems they cannot be fine-tuned. 😂
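The evaluation-without-model-files workflow can be sketched like this (toy module and hypothetical file name, not one of the Model Zoo checkpoints):

```python
import torch
import torch.nn as nn

# Producer side: script and save a toy model.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
torch.jit.save(torch.jit.script(net), "net_script.pt")

# Consumer side: torch.jit.load needs no Python class definition at all,
# so evaluation works even without the repo's backbone source files.
loaded = torch.jit.load("net_script.pt")
loaded.eval()
with torch.no_grad():
    out = loaded(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```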
You can load the weights with torch.jit.load and copy them into the state_dict of a real model (from BACKBONE_DICT) in order to support fine-tuning.
Oh, that's great. Can you provide the pre-trained model of AttentionNet-IRSE-56/92 so that I can fine-tune it? Thank you @cavalleria @xsacha
@bobo0810 Should be like this:
```python
mymodel = AttentionNet_IRSE_92()
mymodel.load_state_dict(torch.jit.load('AttentionNet_IRSE_92_torchscript.pt').state_dict())
```
See: https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.state_dict
@cavalleria Hello, according to the data augmentation results in MODEL_ZOO.md, the baseline is the best, so we do not need to do any data augmentation?
When trained from scratch for the same amount of time, the baseline is better. This will be the case for any sort of classifier, AFAIK. The same applies if you double the size of the dataset: there is more data to learn, so training takes longer.
Augmentation requires either fine-tuning a (pre-trained) baseline or much longer training to see optimal results.
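For concreteness, a light augmentation step could look like the sketch below (a hypothetical helper, not the repo's actual pipeline; random horizontal flip is the usual safe augmentation for face recognition):

```python
import torch

def random_hflip(img, p=0.5):
    """Flip a (C, H, W) image tensor along its width with probability p."""
    if torch.rand(1).item() < p:
        return img.flip(-1)
    return img

# With p=0.0 the image is untouched; with p=1.0 it is always mirrored,
# and flipping twice recovers the original.
img = torch.arange(12.0).reshape(1, 3, 4)
same = random_hflip(img, p=0.0)
twice = random_hflip(random_hflip(img, p=1.0), p=1.0)
```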
@FelixZhang7 Although MODEL_ZOO.md shows this, in my experimental experience adding data augmentation improves performance. @xsacha To make the network converge better, data augmentation should be applied while continuing training from the pre-trained model.
@xsacha @cavalleria @bobo0810 Hello, I want to find a network with better accuracy than resnet50-IR, ideally with similar inference speed. Could you give me some advice?
@FelixZhang7 AttNet-56-IR has similar inference speed to resnet50 on GPU and better accuracy.
I haven't compared the speeds of the networks yet. Referring to MODEL_ZOO.md, I am using attention_irse.
@bobo0810 efficientnet-b1 / mobilenetv3 (with enlarged width), and so on.
@xsacha I've tried AttNet-56-IR; almost the same accuracy compared with resnet50-IR...