facenet-pytorch
How to train and return 128-dim embeddings?
Previously I was working with https://github.com/davidsandberg/facenet, but that repo is based on TensorFlow 1.x, which does not support recent hardware such as the GeForce 30 series and above. So I wonder: can I train from scratch using this repo (the PyTorch version) and change the output dimension to 128 embeddings?
What do you need 128-dim embeddings for?
It will probably be easier to either:
- Add a single linear layer at the end of the model to do the mapping down to 128, freeze all the existing layers, and finetune, or
- Use PCA to reduce the dimension
Training from scratch will be extremely involved just to get a smaller output dimension. I'd recommend just using PCA.
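The frozen-backbone option above can be sketched as follows. This is a minimal illustration, not code from the repo: it stands in a plain `nn.Sequential` for the pretrained 512-dim embedder (facenet-pytorch's `InceptionResnetV1` would take its place), freezes it, and trains only a new 512-to-128 linear head. The input shape and optimizer settings are assumptions for the sketch.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained 512-dim face embedder, e.g.
# facenet-pytorch's InceptionResnetV1(pretrained='vggface2'); any
# frozen nn.Module that outputs (N, 512) works the same way here.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 160 * 160, 512))
for p in backbone.parameters():
    p.requires_grad = False  # freeze all pretrained layers

# Only this new mapping layer gets trained (finetuned with your loss).
head = nn.Linear(512, 128)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(4, 3, 160, 160)  # a batch of aligned face crops
with torch.no_grad():
    feats = backbone(x)          # (4, 512) frozen features
# Re-normalize so L2/cosine comparisons still behave as expected.
emb = nn.functional.normalize(head(feats), p=2, dim=1)
print(emb.shape)
```

In practice you would finetune `head` with the same metric-learning loss used for the original embeddings (e.g. triplet loss) so the 128-dim space preserves identity separation.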
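The PCA option can be sketched in plain NumPy via SVD, with no retraining at all. This is an illustrative helper (the name `pca_reduce` is not from any library): fit it once on a representative gallery of 512-dim embeddings, then reuse the returned `mean` and `components` to project new faces.

```python
import numpy as np

def pca_reduce(embeddings, out_dim=128):
    """Reduce (N, D) embeddings to (N, out_dim) with PCA via SVD."""
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # Rows of vt are principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:out_dim]            # (out_dim, D) projection
    reduced = centered @ components.T    # (N, out_dim)
    # Re-normalize so L2/cosine distances remain comparable.
    reduced /= np.linalg.norm(reduced, axis=1, keepdims=True)
    return reduced, mean, components

# Toy demo: 1000 fake 512-dim embeddings down to 128 dims.
emb = np.random.randn(1000, 512).astype(np.float32)
reduced, mean, comps = pca_reduce(emb, out_dim=128)
print(reduced.shape)
```

A new face embedding `e` is then mapped with `(e - mean) @ comps.T` followed by the same normalization.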
Thanks for the reply. I was planning to retrain on face datasets from other demographics: the davidsandberg model is trained mainly on Caucasian faces, and its performance on Asian faces is poor. So I was asking whether this PyTorch version of FaceNet can be retrained from scratch, since the davidsandberg repo is limited to TensorFlow 1.x and is not compatible with the latest NVIDIA cards.
@michealChin Hi, did you get a chance to retrain with this repo? Please let me know about it. Thanks.