SimSwapHD
Can someone share some test images?
I haven't started training yet, so could someone provide a few test images so I can see what results this training code produces?
Sorry, but it looks like this training code is not working. The author of the repository does not answer or comment on the questions in this topic in any way, and we have no pretrained model to prove that the code really works.
It works in my project. Go to util/videoswap.py and set crop_size=512 in the video_swap function. I can show you a frame of my video, which uses the face of President Obama.
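For anyone unsure what this change amounts to: a minimal sketch of the pattern, assuming (as the comment suggests) that video_swap takes a crop_size keyword whose default of 224 must be raised to match the 512-pixel HD generator. The function body here is a stand-in, not the repo's actual implementation:

```python
# Hypothetical sketch: video_swap produces the swapped face at
# crop_size x crop_size, so the default (224) must match the model.
def video_swap(frame_shape, crop_size=224):
    # Stand-in for the real swap: just report the crop resolution used.
    return (crop_size, crop_size, frame_shape[2])

# Default of 224 is wrong for a 512 HD generator:
print(video_swap((1080, 1920, 3)))                  # (224, 224, 3)
# Either edit the default in util/videoswap.py or pass it explicitly:
print(video_swap((1080, 1920, 3), crop_size=512))   # (512, 512, 3)
```

The point is that every call site has to agree on the same crop size, otherwise the swapped crop and the rest of the pipeline disagree on tensor shapes.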
I did everything as you said, but there is no result; the face does not swap in the video. I changed crop_size = 224 to crop_size = 512 everywhere, and in reverse2original.py I also changed target_mask = cv2.resize(tgt_mask, (224, 224)) to (512, 512). If you don't change these values, you get ValueError: operands could not be broadcast together with shapes (512,512,3) (224,224,1). How did you train the model? How many epochs, and which face set? Share your pretrained model if it really works.
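The ValueError above is a plain NumPy broadcasting failure: the swapped frame is 512x512x3 while the mask is still 224x224x1, and the spatial dimensions don't line up. A self-contained reproduction (plain zeros arrays stand in for the real frame and mask; the repo would use cv2.resize to fix the mask's size):

```python
import numpy as np

# Shapes taken from the traceback in the comment above.
frame = np.zeros((512, 512, 3))
bad_mask = np.zeros((224, 224, 1))

try:
    frame * bad_mask  # 512 vs 224 on both spatial axes: cannot broadcast
except ValueError as e:
    print("broadcast error:", e)

# Once the mask is resized to the same spatial size, the trailing
# channel dim of 1 broadcasts against 3 and blending works:
good_mask = np.zeros((512, 512, 1))
blended = frame * good_mask
print(blended.shape)  # (512, 512, 3)
```

This is why the crop_size change and the cv2.resize change have to be made together: either one alone leaves mismatched shapes.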
I would recommend not using the mask, since the results look basically the same and it slows down the process. I used CelebA and randomly picked 13k images; just follow the instructions and it looked fine after around 80 epochs. The pretrained model belongs to the whole project group, and I'm sorry, I don't have authorization to share it with other people.
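For anyone wanting to replicate the "randomly picked 13k images" step: a small sketch of one way to draw such a subset. The directory layout, file extension, and helper name are my assumptions, not anything from this repo:

```python
import random
from pathlib import Path

def sample_subset(image_dir, k=13000, seed=0):
    """Pick k random images from image_dir, reproducibly.

    Hypothetical helper: assumes CelebA images sit flat in one
    directory as .jpg files, which may not match your layout.
    """
    files = sorted(Path(image_dir).glob("*.jpg"))
    rng = random.Random(seed)  # fixed seed so the subset is repeatable
    return rng.sample(files, min(k, len(files)))
```

Fixing the seed matters if you ever want to resume or compare runs on the same subset.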
Dear tiansw1, you could train for several hundred epochs; I find that when the epoch count is too small, the results show artifacts.
Hello, does anyone else have new results? Please share a few images and the settings you used to get them.
I tried following the instructions on Colab, but as @netrunner-exe said, the face does not swap in the video.
I also used --no_vgg_loss and lowered --lambda_rec to 5, but I don't see any changes after some training.