
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.

152 Wav2Lip issues

I am using HDTF for wav2lip288 training: nearly 1,700,000 pictures, 16 hours of video. My SyncNet eval loss is 0.3, my L1 eval loss is currently 0.019051536196276822, and my sync eval loss is...

The chin sometimes looks like a straight line in all the generated videos. For example: https://github.com/Rudrabha/Wav2Lip/assets/8099731/5730a01e-a6e1-48fa-8149-18319dbc419a It looks very unnatural. How can this issue be solved? Any ideas?

```
Traceback (most recent call last):
  File "preprocess.py", line 33, in
    device='cuda:{}'.format(id)) for id in range(args.ngpu)]
  File "preprocess.py", line 33, in
    device='cuda:{}'.format(id)) for id in range(args.ngpu)]
AttributeError: module 'face_detection' has no...
```
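This AttributeError usually means Python resolved `face_detection` to a different package than the repo's bundled `face_detection/` folder (for example, a pip-installed package with the same name shadowing it). A minimal diagnostic sketch, using only the standard library:

```python
import importlib.util

# Locate which 'face_detection' module Python would actually import.
# If spec.origin does not point into the Wav2Lip repo's face_detection/
# directory, a conflicting package is shadowing the bundled one.
spec = importlib.util.find_spec("face_detection")
print(spec.origin if spec else "face_detection is not importable at all")
```

Running this from the repo root should show the repo's own `face_detection/__init__.py`; any other path indicates the conflicting install to remove.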

Can someone please help me i get this issue all the time. I have a face on the video on all frames...

After the following output is printed, the process (inference.py) is getting killed.

```
wav
Using cpu for inference.
Reading video frames...
Number of frames available for inference: 652
(80, 1190)
Length of...
```
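A process that dies with "Killed" and no Python traceback is typically terminated by the OS out-of-memory killer. A hedged command-line sketch that reduces memory pressure (the `--resize_factor`, `--face_det_batch_size`, and `--wav2lip_batch_size` flags are defined in inference.py; the file names and values here are illustrative only):

```shell
# Illustrative invocation: halve the input resolution and shrink batches
# to lower peak memory use during face detection and generation.
python inference.py --checkpoint_path wav2lip.pth \
    --face input_video.mp4 --audio input_audio.wav \
    --resize_factor 2 \
    --face_det_batch_size 4 \
    --wav2lip_batch_size 32
```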

```python
def datagen(frames, mels):
    img_batch, mel_batch, frame_batch, coords_batch = [], [], [], []

    if args.box[0] == -1:
        if not args.static:
            face_det_results = face_detect(frames)  # BGR2RGB for CNN face detection
        else:
            face_det_results...
```
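For context, the `args.box` branch elided above lets users bypass face detection entirely by supplying a fixed crop region. A minimal pure-Python sketch of that idea, assuming `box = [y1, y2, x1, x2]` and frames as nested lists of rows (the helper name is hypothetical, not from the repo):

```python
def fixed_box_results(frames, box):
    # box = [y1, y2, x1, x2]: crop every frame to the same region,
    # returning (crop, coords) pairs like face_detect() would,
    # but without running any detector.
    y1, y2, x1, x2 = box
    return [[[row[x1:x2] for row in f[y1:y2]], (y1, y2, x1, x2)]
            for f in frames]
```

This is useful when the face position is static and the detector either fails or is too slow.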

> It should be a problem with the librosa version, but the old version won't install, and I have been struggling with how to solve this.

Steps to fix this issue:

1. Change the dependency versions:

```
librosa==0.10.1
numpy==1.24.3
opencv-contrib-python>=4.2.0.34
opencv-python>=4.7.0.72
torch==1.11.0
torchvision==0.12.0
tqdm==4.45.0
numba==0.59.0
```

2. Modify line 100 of audio.py to use keyword arguments:

```python
return librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft, n_mels=hp.num_mels, fmin=hp.fmin, fmax=hp.fmax)
```

After adjusting the parameters, it runs without problems. _Originally posted by @TzyTman in..._
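The underlying cause is that librosa 0.10 made the parameters of `librosa.filters.mel` keyword-only, so the repo's older positional call raises a TypeError. A self-contained sketch of the breakage (the `mel` stub below only mimics the keyword-only signature; it is not librosa itself):

```python
# Stub mimicking librosa >= 0.10, where all mel-filterbank
# parameters became keyword-only.
def mel(*, sr, n_fft, n_mels=128, fmin=0.0, fmax=None):
    return (sr, n_fft, n_mels, fmin, fmax)

# Old positional style, as in the original audio.py, now fails:
try:
    mel(16000, 800, 80)
except TypeError:
    print("positional call rejected")

# Keyword style, as in the fix above, works:
print(mel(sr=16000, n_fft=800, n_mels=80, fmin=55, fmax=7600))
```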

https://github.com/Rudrabha/Wav2Lip/assets/646709/a1ea8e32-3b89-4c21-95f8-bd169489a3be

When training hq_wav2lip_train.py on LRS2, the percep/Fake/Real losses always stay around 0.69. Does anybody know how to solve this problem?
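The value 0.69 is a telling number: for a binary cross-entropy GAN objective, ln 2 ≈ 0.693 is exactly the loss when the discriminator outputs 0.5 for every sample, i.e. it cannot distinguish real from fake at all. A quick check of that arithmetic:

```python
import math

# BCE loss when the discriminator predicts p = 0.5 for every sample:
# -[y*ln(p) + (1-y)*ln(1-p)] = -ln(0.5) regardless of the label y.
plateau = -math.log(0.5)
print(plateau)  # ≈ 0.6931
```

So a persistent 0.69 plateau usually means the adversarial signal is flat, which points at learning-rate balance or the discriminator setup rather than the data.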