
wav2lip_train training error: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])

Status: Open · cncbec opened this issue 1 year ago • 6 comments

[screenshot of the error traceback]

I am using my personal dataset, with batch_size set to 16 in hparams.py.

cncbec · Oct 12 '23

Have you been able to resolve this issue? I am also using a custom dataset and getting the same error.

ulucsahin · Oct 12 '23

> Have you been able to resolve this issue? I am also using a custom dataset and getting the same error.

No, I haven't.

cncbec · Oct 13 '23

I just resolved this issue by adding `syncnet.eval()` after `syncnet = SyncNet().to(device)`.

@cncbec @ulucsahin I am also using a custom dataset. My guess is that either this error was never encountered with the original dataset, or the model was being switched to eval mode inside the evaluation functions included for the LRS2 data, which I'm not using.
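For reference, the change described above would look roughly like this (a sketch assuming the stock `wav2lip_train.py`, where the expert SyncNet is loaded once and its weights are frozen):

```python
# wav2lip_train.py (sketch of the fix): load the expert discriminator as usual...
syncnet = SyncNet().to(device)
for p in syncnet.parameters():
    p.requires_grad = False

# ...then keep it in eval mode so its BatchNorm layers use running statistics
# instead of per-batch statistics; a batch of size 1 then no longer triggers
# "Expected more than 1 value per channel when training".
syncnet.eval()
```

Since the SyncNet is frozen and never updated during lip-sync training, running it in eval mode should not change what it is meant to measure.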

nicoleng · Oct 30 '23

I experienced the same problem; the error is raised whenever the sync loss drops to 0.75 or less.

Ezrealz · Dec 22 '23

!!! Because PyTorch uses BatchNorm during training, the batch size must be greater than 1, so running under `model.train()` triggers the `_verify_batch_size()` check. In the end, the fix was simply to prepare more than one sample.
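A minimal illustration of that check (my own sketch, not code from the repo): a BatchNorm layer in training mode rejects an input that provides only one value per channel, which is exactly the `[1, 512, 1, 1]` activation from the error message, while eval mode accepts it:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(512)
x = torch.randn(1, 512, 1, 1)  # one sample, 1x1 spatial -> one value per channel

bn.train()
try:
    bn(x)
except ValueError as e:
    # "Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])"
    print(e)

bn.eval()
print(bn(x).shape)  # eval mode uses running statistics, so batch size 1 is fine
```

So either keep at least two samples in every batch, or put the frozen SyncNet in eval mode as suggested above.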

bjfrbjx · May 13 '24

@nicoleng

> I just resolved this issue by adding `syncnet.eval()` after `syncnet = SyncNet().to(device)`.
>
> @cncbec @ulucsahin I am also using a custom dataset. My guess is that either this error was never encountered with the original dataset, or the model was being switched to eval mode inside the evaluation functions included for the LRS2 data, which I'm not using.

Hi, sorry for bothering you, but since you are using a custom dataset, could you please show me the structure of the filelist for a custom dataset? When I try to train on my own dataset, the progress always stays at 0%, and I think it is an issue with the filelist (e.g. the train.txt file).

Jayzeen13 · Aug 02 '24
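For anyone hitting the stuck-at-0% symptom above, a sketch of what the stock loader appears to expect (based on `get_image_list` in `wav2lip_train.py`; the folder names below are hypothetical placeholders, not from the thread): each line of `filelists/train.txt` is a directory path relative to `--data_root`, and each listed directory should contain the face frames and `audio.wav` written by `preprocess.py`.

```text
# filelists/train.txt -- hypothetical example: one preprocessed clip folder per line, relative to --data_root
person_01/clip_0001
person_01/clip_0002
person_02/clip_0001
```

So, for example, `<data_root>/person_01/clip_0001/` would hold `0.jpg, 1.jpg, ..., audio.wav`. If the listed folders do not resolve to readable frames, the dataset keeps re-sampling and the progress bar can stay at 0%, which would match the symptom described.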