Update train.py
DATA PARALLEL
When I trained the FaceNet model using your repository, the default batch size of 128 required more memory than a single GPU could provide.
To fix this, I wrapped the model in nn.DataParallel, which lets the batch of 128 be split across multiple GPUs and also speeds up training.
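Here is a minimal sketch of the change, assuming a multi-GPU machine; since I'm not showing the repository's actual model class here, `FaceNetStandIn` below is a placeholder backbone used only for illustration:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Stand-in for the repository's FaceNet model (placeholder for illustration).
class FaceNetStandIn(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x):
        # L2-normalize embeddings, as is typical for FaceNet-style training.
        return nn.functional.normalize(self.backbone(x), p=2, dim=1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = FaceNetStandIn()

# Wrap the model so each forward pass scatters the batch across all visible GPUs
# and gathers the outputs back on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

# A batch of 128 images is now split evenly across the GPUs,
# so no single card has to hold the whole batch in memory.
images = torch.randn(128, 3, 224, 224)
embeddings = model(images.to(device))
print(embeddings.shape)  # torch.Size([128, 128])
```

Training then proceeds exactly as before; the rest of the loop (loss, optimizer step) is unchanged because nn.DataParallel is transparent to the caller.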