SimSwap
Why do I get such unsatisfactory results when I use the model trained on the 512-crop dataset to make predictions?
I trained the model with train.py and obtained 528000_net_D.pth, 528000_net_G.pth, 528000_optim_D.pth, and 528000_optim_G.pth. I then used 528000_net_G.pth to make predictions with test_wholeimage_swapsingle.py. Why does the face-swap result look like this?
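For context, the inference invocation looks roughly like this. This is a minimal sketch based on the flags documented in the SimSwap README (--crop_size, --use_mask, --name, --Arc_path, --pic_a_path, --pic_b_path, --output_path); the checkpoint directory name `512`, the rename to `latest_net_G.pth`, and all image paths are my assumptions, not confirmed details from the question:

```bash
# Sketch of a 512-model inference run; all paths are placeholders.
# Assumption: the test script resolves the generator weights as
# checkpoints/<name>/latest_net_G.pth, so the trained checkpoint is
# copied there under that name first.
mkdir -p checkpoints/512
cp 528000_net_G.pth checkpoints/512/latest_net_G.pth

python test_wholeimage_swapsingle.py \
    --crop_size 512 \
    --use_mask \
    --name 512 \
    --Arc_path arcface_model/arcface_checkpoint.tar \
    --pic_a_path ./demo_file/source.jpg \
    --pic_b_path ./demo_file/target.jpg \
    --output_path ./output/
```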