facenet-pytorch

How to train and return 128 embedding?

Open michealChin opened this issue 1 year ago • 3 comments

Previously I was working with https://github.com/davidsandberg/facenet, but that repo is based on TensorFlow 1.x, and the latest hardware (GeForce 30 series and above) is not supported by TensorFlow 1.x. So I was wondering: can I train from scratch using this repo (the PyTorch version) and change the output dimension to 128 embeddings?

michealChin avatar May 10 '23 09:05 michealChin

What do you need 128-dim embeddings for?

It will probably be easier to either:

  1. Add a single linear layer at the end of the model to do the mapping down to 128, freeze all the existing layers, and finetune, or
  2. Use PCA to reduce the dimension

Training from scratch will be extremely involved just to get a smaller output dimension. I'd recommend just using PCA.
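
For reference, a minimal sketch of both options, assuming the pretrained InceptionResnetV1 from this repo (which outputs 512-dim embeddings), a hypothetical linear projection head, and scikit-learn for the PCA variant; batch sizes, learning rate, and the placeholder inputs are illustrative only:

```python
import numpy as np
import torch
from torch import nn
from facenet_pytorch import InceptionResnetV1
from sklearn.decomposition import PCA

# Option 1: bolt a 512 -> 128 linear projection onto the frozen pretrained model
# and finetune only the new layer (e.g. with a triplet or contrastive loss).
backbone = InceptionResnetV1(pretrained='vggface2').eval()
for param in backbone.parameters():
    param.requires_grad = False  # freeze every existing layer

model = nn.Sequential(backbone, nn.Linear(512, 128))
optimizer = torch.optim.Adam(model[1].parameters(), lr=1e-3)  # only the projection trains

faces = torch.randn(4, 3, 160, 160)  # placeholder batch of aligned 160x160 face crops
emb_128 = model(faces)               # shape: (4, 128)

# Option 2: PCA over a bank of precomputed 512-dim embeddings.
bank = np.random.randn(1000, 512)    # placeholder; use real embeddings from the backbone
pca = PCA(n_components=128).fit(bank)
reduced = pca.transform(bank)        # shape: (1000, 128)
```

For the PCA route, fit on a reasonably large, representative set of embeddings and reuse the same fitted `pca` at inference time so all vectors live in the same 128-dim space.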

timesler avatar Jul 03 '23 21:07 timesler

Thanks for the reply. I was planning to retrain on datasets covering other demographics: the davidsandberg model is trained mainly on Caucasian faces, and its performance on Asian faces is poor. That's why I was asking whether this PyTorch version of FaceNet can be retrained from scratch, since the davidsandberg repo is limited to TensorFlow 1 and not compatible with the latest NVIDIA cards.
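
For what it's worth, a minimal finetuning sketch on a custom identity dataset (e.g. one weighted toward Asian faces), assuming face crops already aligned to 160x160 and laid out one folder per identity; the `data/train` path, epoch count, and hyperparameters are placeholders, not the repo's official training recipe:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from facenet_pytorch import InceptionResnetV1

# Placeholder layout: data/train/<identity_name>/<image>.jpg, crops already aligned
transform = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),  # map to ~[-1, 1]
])
dataset = datasets.ImageFolder('data/train', transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = InceptionResnetV1(
    classify=True,                        # add a classification head for finetuning
    pretrained='vggface2',                # start from the released weights
    num_classes=len(dataset.class_to_idx)
).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(8):                    # arbitrary epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# After training, load the same weights with classify=False to get 512-dim embeddings.
```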

michealChin avatar Jul 16 '23 07:07 michealChin

@michealChin Hi, did you get a chance to retrain this repo? Please let me know how it went. Thanks!

jasuriy avatar Jun 26 '24 08:06 jasuriy