Using a vggface2-based embedding in inswapper is not producing correct results
Hi,
I'm trying to use a vggface2-based embedding computed with facenet-pytorch. Here is the code that generates it:
```python
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1


class FaceEmbed:
    '''https://github.com/timesler/facenet-pytorch'''

    def __init__(self):
        device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
        # If required, create a face detection pipeline using MTCNN:
        self.mtcnn = MTCNN(device=device)
        # Create an inception resnet (in eval mode) on the same device as MTCNN:
        self.resnet = InceptionResnetV1(pretrained='vggface2').eval().to(device)

    def get(self, img):
        # Get cropped and prewhitened image tensor (None if no face is detected)
        img_cropped = self.mtcnn(img)
        # Calculate embedding (unsqueeze to add batch dimension)
        img_embedding = self.resnet(img_cropped.unsqueeze(0))
        return img_embedding.detach().cpu().numpy().flatten()
```
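
For reference, a minimal usage sketch of the class above (the image path is just a placeholder; facenet-pytorch's MTCNN accepts a PIL image, and InceptionResnetV1 returns a 512-dimensional embedding):

```python
from PIL import Image

embedder = FaceEmbed()
source_image = Image.open('source.jpg')  # placeholder path to the source face
embedding = embedder.get(source_image)   # vggface2 embedding as a flat numpy array
print(embedding.shape, embedding.dtype)  # expected: (512,) float32
```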
Then, in a custom copy of inswapper, I replaced this line https://github.com/deepinsight/insightface/blob/da738727d5fe26d305499c311bd05cb51c3936d5/python-package/insightface/model_zoo/inswapper.py#L50 with:

```python
latent = embed.get(source_image).reshape((1, -1))
```
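
For context, this is roughly where the replacement sits inside `INSwapper.get()` (a sketch, not the exact file contents; `embed` is the FaceEmbed instance above and `source_image` is the source image I pass in, both my own wiring):

```python
# Sketch of the edit inside a copy of model_zoo/inswapper.py, INSwapper.get()
# (surrounding lines paraphrased, not exact file contents).
import numpy as np

# The original line 50 takes the embedding attached to the detected source face, roughly:
#   latent = source_face.normed_embedding.reshape((1, -1))

# Replacement: use the vggface2 embedding instead (embed / source_image are
# my own wiring, passed into the swapper from the calling code):
latent = embed.get(source_image).reshape((1, -1)).astype(np.float32)
```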
The resulting swap (the "Incorrect" one in the picture) is not good at all. Any pointers?