Wav2Lip
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.
Using cuda for inference. Reading video frames... Number of frames available for inference: 167. `LLVM ERROR: Symbol not found: __svml_cosf8_ha`. I traced it to the llvmlite library. I looked all...
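If the failure really comes from the numba/llvmlite stack that librosa pulls in, a workaround that is often suggested is to disable Intel SVML through Numba's `NUMBA_DISABLE_INTEL_SVML` environment variable before numba gets imported. A minimal sketch, under that assumption:

```python
# Workaround sketch, assuming the missing __svml_cosf8_ha symbol comes from
# numba/llvmlite (imported indirectly through librosa) emitting Intel SVML intrinsics.
# The flag has to be set before numba is imported anywhere in the process.
import os
os.environ["NUMBA_DISABLE_INTEL_SVML"] = "1"

import librosa  # pulls in numba/llvmlite only after SVML has been disabled
```

Reinstalling a matching `numba`/`llvmlite` pair (or a runtime package that actually provides the SVML symbols, such as conda's `icc_rt`) is the other direction people usually try.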
Hello, I want to train SyncNet with the image sequence length set to 3 and 7, but I don't know if my configuration is correct. In the case of 5...
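For reference, the sequence length and the mel window length have to change together. A small illustrative sketch, assuming the defaults in `hparams.py` (`syncnet_T = 5`, `syncnet_mel_step_size = 16`, 25 fps video, 16 kHz audio with hop size 200, i.e. 80 mel frames per second):

```python
# Illustrative calculation (not code from the repo): the mel window that covers
# syncnet_T video frames, assuming 25 fps video and 80 mel frames per second,
# which is what the default pairing syncnet_T = 5 / syncnet_mel_step_size = 16 implies.
FPS = 25
MEL_FRAMES_PER_SECOND = 80.0  # sample_rate 16000 / hop_size 200

def mel_window_for(syncnet_T: int) -> float:
    """Number of mel-spectrogram frames spanning syncnet_T video frames."""
    return syncnet_T / FPS * MEL_FRAMES_PER_SECOND

for t in (3, 5, 7):
    print(t, mel_window_for(t))  # 3 -> 9.6, 5 -> 16.0, 7 -> 22.4
```

Only T = 5 lines up with an integer mel window under the default audio settings, so a configuration with T = 3 or 7 also needs `syncnet_mel_step_size` (and anything hard-coded to 16) rounded and adjusted consistently.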
Hey folks - really love Wav2Lip, but just like everyone else, I wish it had commercial usage rights and produced higher quality output. Just thought I'd share this if it's...
Error message:
```
Traceback (most recent call last):
  File "wav2lip_train.py", line 371, in <module>
    train(device, model, train_data_loader, test_data_loader, optimizer,
  File "wav2lip_train.py", line 220, in train
    g = model(indiv_mels, x)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", ...
```
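Since the preview cuts off before the actual exception, one way to narrow it down is to confirm that the tensors reaching `g = model(indiv_mels, x)` have the shapes the generator expects. A standalone sanity-check sketch with dummy inputs, assuming the default hparams (T = 5 frames, 96x96 face crops, 80 mel bins, 16 mel steps per window):

```python
# Dummy-input sanity check for the generator, assuming default hparams.
import torch
from models import Wav2Lip  # generator definition from this repo

model = Wav2Lip()
indiv_mels = torch.randn(2, 5, 1, 80, 16)  # (batch, T, 1, n_mels, mel_step_size)
x = torch.randn(2, 6, 5, 96, 96)           # (batch, 6, T, H, W): masked frame + reference, stacked on channels

with torch.no_grad():
    g = model(indiv_mels, x)               # the call that fails at line 220 of wav2lip_train.py
print(g.shape)                             # expected: torch.Size([2, 3, 5, 96, 96])
```

If this runs, the problem is more likely in the data loader producing mismatched shapes (or a device mismatch) than in the model itself.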
Dear author, thanks for sharing this excellent work. I found that when using my own video, there is a clearly visible box region around the mouth in the output result, see...
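The hard edge usually sits where the generated 96x96 crop is resized and pasted back over the detected face box. A possible mitigation, not part of the official `inference.py`, is to blend the pasted patch in with a feathered mask; `paste_with_feather` below is an illustrative helper, not a repo function:

```python
# Illustrative post-processing sketch: blend the generated crop into the original
# frame with a feathered alpha mask instead of a hard rectangular paste.
import cv2
import numpy as np

def paste_with_feather(frame, pred_patch, box, feather=15):
    """Paste `pred_patch` into `frame` inside `box` = (y1, y2, x1, x2) with soft edges."""
    y1, y2, x1, x2 = box
    patch = cv2.resize(pred_patch, (x2 - x1, y2 - y1)).astype(np.float32)
    mask = np.zeros(patch.shape[:2], np.float32)
    mask[feather:-feather, feather:-feather] = 1.0               # opaque core
    mask = cv2.GaussianBlur(mask, (2 * feather + 1, 2 * feather + 1), 0)[..., None]
    region = frame[y1:y2, x1:x2].astype(np.float32)
    frame[y1:y2, x1:x2] = (mask * patch + (1.0 - mask) * region).astype(np.uint8)
    return frame
```

Widening the detected box a little via the `--pads` argument of `inference.py` is another commonly tried first step, since a very tight crop makes the seam more noticeable.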
I have been experimenting over the last few days with both Wav2Lip HD (not in auto) and retalker, and found that both are slow and very GPU-intensive. I would like to...
Can anyone help me take a look at this!
Hello, has anyone else experienced this issue? It occurs with both the lipsync_expert discriminator and a custom-trained discriminator. I have about 4 minutes of clips (anime) that I have...
I trained on my custom dataset, which is quite small and contains only one face. When I try inference, I get a grey box over the face in the video...
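A flat grey patch from a custom checkpoint often just means the generator is under-trained on such a small dataset, but it is worth first ruling out a state_dict loading mismatch. A hedged sketch of loading the checkpoint the way `inference.py` does, stripping the `module.` prefix that `DataParallel` adds (`load_generator` is an illustrative name):

```python
# Loading check, assuming the checkpoint was written by wav2lip_train.py
# (weights stored under the "state_dict" key), possibly from a DataParallel-wrapped
# model whose parameter names carry a "module." prefix.
import torch
from models import Wav2Lip

def load_generator(checkpoint_path, device="cuda"):
    ckpt = torch.load(checkpoint_path, map_location=device)
    state = {k.replace("module.", "", 1): v for k, v in ckpt["state_dict"].items()}
    model = Wav2Lip()
    model.load_state_dict(state)  # raises if any key is missing or unexpected
    return model.to(device).eval()
```

If this loads cleanly and the output is still grey, the checkpoint itself is the more likely culprit: a few minutes of single-speaker data is far less than the model was designed for, so fine-tuning from the released weights may work better than training from scratch.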