Unstable background in face video output
Hi guys, I have a problem when using few-shot vid2vid. After face training, I ran testing with a driving video from the FaceForensics dataset. In the output video the face looks good, but the background oscillates and is not stable. Has anyone run into this problem, and how can I fix it? Thank you.
Have you solved the problem? I had the same issue.
I've run into this problem too. It seems to be caused by warping the face boundary using the predicted optical flow. Image inpainting work should help diminish this artifact, since we can inpaint the missing background. I'm still trying.
Hey, can you share the trained model? I'm hitting this problem too: https://github.com/NVlabs/few-shot-vid2vid/issues/64.
I'm running into the same problem. Have you solved it?