face-vid2vid
There is a problem with the generated video. What could be the cause?
Thank you for sharing your model!
I used your pre-trained model. What's the problem?
thanks!
Which did you use, motion transfer or reconstruction? I guess there is a problem with your source image.
Motion transfer, with the source image shown above.
Run command: python evaluate.py --ckp=100 --source=005.jpg --driving=datasets/vox/test/id10343#TFIZ9vWg6EE#003826#003939.mp4
This is strange. You can try other source/driving pairs to see which one causes the problem. Several causes are possible:
- the driving video has bad illumination
- the source/driving face needs to be resized
- the source image (jpg format) is not read correctly
- Asian faces are outside the training dataset distribution, so they are not modeled properly
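On the "image not read correctly" point, one cheap sanity check is to look at the file's magic bytes and confirm it really is the format its extension claims. This is a hypothetical helper, not part of the face-vid2vid repo:

```python
def detect_image_format(header: bytes):
    """Classify an image by its magic bytes: 'jpeg', 'png', or None.

    Hypothetical sanity-check helper; pass the first 8 bytes of the file.
    """
    if header.startswith(b"\xff\xd8\xff"):       # JPEG SOI marker
        return "jpeg"
    if header.startswith(b"\x89PNG\r\n\x1a\n"):  # 8-byte PNG signature
        return "png"
    return None

# Example: check the suspect source image (path assumed from the command above).
# with open("005.jpg", "rb") as f:
#     print(detect_image_format(f.read(8)))
```

If this returns None, or "png" for a file named .jpg, the image loader may silently misread it.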
Following your suggestion, I changed the driving video and the source image, but the result is still terrible.
generated:
drive video:
source image:
Run command: python evaluate.py --ckp=100 --source=test.png --driving=datasets/vox/test/id10070#FWHkEnS8v-M#001286#001433.mp4
How should I interpret this?
Now I think this is due to an environment problem, such as the PyTorch version. For some reason the pretrained model or the image is not loaded correctly. The head-pose estimator is only used during training, so it's irrelevant here.
I'm using torch==1.6. What about you?
You can update all your packages to the latest versions. Besides, you can re-clone the repo to confirm there are no code changes.
thanks @zhengkw18
It turned out I had made a mistake when working around this error:
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
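For anyone hitting the same error: it means `/` on integer tensors used to floor-divide in older PyTorch but now follows Python 3 true division. A minimal plain-Python sketch of the two behaviors the message refers to (the torch-level replacements, `torch.true_divide` and `torch.floor_divide`, are noted in comments; the values are just for illustration):

```python
# `/` on integer tensors in newer PyTorch behaves like Python 3 `/`;
# code that relied on the old floor behavior must switch to `//`.
a, b = 7, 2

true_div = a / b    # true division  -> 3.5 (torch.true_divide(a, b))
floor_div = a // b  # floor division -> 3   (torch.floor_divide(a, b))

print(true_div, floor_div)
```

Replacing the offending `/` with `//` (or `torch.floor_divide`) where the old truncating behavior was intended is the fix the error message suggests.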