One-Shot_Free-View_Neural_Talking_Head_Synthesis

PyTorch implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing"

Issues (38)

Dear Longhao, thank you for this great source code. While training at 512*512 resolution on VoxCeleb2, I ran into the following problem: When this model...

How can I achieve an effect like http://imaginaire.cc/vid2vid-cameo/, which can rotate the eyes? I also want to control the mouth and other parts of the face. I drew some quiver plots, but...

I apologize for the intrusion, but I wanted to inquire about the pre-trained model you provided. Is it configured for the SPADE mode? I'm encountering some issues while training in...

I am able to synthesize neural talking heads with 'demo.py' but I am not getting a synchronized audio channel in the result.mp4 file - the audio in the driving video...
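
A common workaround (not part of demo.py; a minimal sketch assuming ffmpeg is on PATH and that the file names below match your setup) is to mux the driving video's audio track back into the silent synthesized clip:

```python
# Hypothetical post-processing step, not part of this repository: copy the
# audio stream from the driving clip into the (silent) result video.
# Assumes ffmpeg is installed; "driving.mp4" and the output name are placeholders.
import subprocess

def mux_audio(result_video="result.mp4",
              driving_video="driving.mp4",
              output_video="result_with_audio.mp4"):
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", result_video,      # video comes from the synthesized clip
            "-i", driving_video,     # audio comes from the driving clip
            "-map", "0:v:0", "-map", "1:a:0",
            "-c:v", "copy",          # keep the synthesized frames untouched
            "-c:a", "aac",
            "-shortest",             # stop at the shorter of the two streams
            output_video,
        ],
        check=True,
    )

if __name__ == "__main__":
    mux_audio()
```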

Good day, Zhang: I encountered a problem when training. My conda env: torch == 1.7.1+cu110, torchvision == 0.8.2+cu110, pytorch3d == 0.6.1, python == 3.8. The exception information is as below: File...

I needed to use a VPN to access MediaFire; please upload it to OneDrive or Google Drive like most others do. I can open a PR with my upload links...

I would like to ask how much the training cost increases when the number of keypoints (kp) is raised, say from 15 to 20. I want to train the 20-kp model...

Is there any extra preprocessing applied to the source img? Or is extra data augmentation such as cropping added during training? (I ask because at test time I found that the driving results are almost the same regardless of how the source img is cropped. How is that achieved?)

Hello, when I tried to run the demo I got this error: size mismatch for first.conv.weight: copying a param with shape torch.Size([64, 3, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3]). How can I solve this? Thank you!
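
A size mismatch like this usually means the checkpoint was trained with a different generator configuration than the YAML currently being loaded (here the checkpoint's first conv uses a 7x7 kernel while the freshly built model uses 3x3). Below is a minimal debugging sketch for listing every mismatched parameter; the checkpoint key 'generator' and the file path are assumptions, not guaranteed by this repository:

```python
# Hypothetical debugging helper: compare parameter shapes in a checkpoint
# against a freshly instantiated model so the mismatched config option can
# be tracked down. Adjust the path and state-dict key to your setup.
import torch

def report_shape_mismatches(model, checkpoint_path="vox-256.pth.tar"):
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    # Fall back to a flat state dict if there is no 'generator' sub-dict.
    ckpt_state = checkpoint.get("generator", checkpoint)
    model_state = model.state_dict()
    for name, ckpt_param in ckpt_state.items():
        if name in model_state and model_state[name].shape != ckpt_param.shape:
            print(f"{name}: checkpoint {tuple(ckpt_param.shape)} "
                  f"vs model {tuple(model_state[name].shape)}")
```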

Running python run.py --config config/vox-256.yaml --device_ids 0,1,2,3,4,5,6,7 prints "cuda is available" and then fails: Traceback (most recent call last): File "run.py", line 83, in dataset = FramesDataset(is_train=(opt.mode == 'train'), **config['dataset_params']) File "/root/autodl-tmp/WZLCode/frames_dataset.py", line 70,...
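
Since the traceback is cut off inside the FramesDataset constructor, one quick sanity check before launching run.py is to confirm that the dataset_params in the YAML point at an existing, non-empty directory. A minimal sketch, assuming PyYAML is installed and a FOMM-style config layout with a root_dir key (the key name is an assumption; adjust it if your YAML differs):

```python
# Hypothetical pre-flight check, not part of run.py: load the training config
# and verify the dataset directory it references exists and is non-empty.
import os
import yaml

def check_dataset(config_path="config/vox-256.yaml"):
    with open(config_path) as f:
        config = yaml.safe_load(f)
    root_dir = config["dataset_params"].get("root_dir", "")
    if not os.path.isdir(root_dir):
        raise FileNotFoundError(f"dataset root_dir does not exist: {root_dir!r}")
    entries = os.listdir(root_dir)
    print(f"{root_dir}: {len(entries)} entries found")

if __name__ == "__main__":
    check_dataset()
```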