Wav2Lip
wav2lip_train training error: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])
I am using my own dataset, with batch_size set to 16 in hparams.py.
Have you been able to resolve this issue? I am also using a custom dataset and getting the same error.
No, I haven't.
I just resolved this issue by adding `syncnet.eval()` after `syncnet = SyncNet().to(device)`.
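For context, here is a minimal sketch of why that works (plain PyTorch; the `Sequential` below is a stand-in for SyncNet, whose final layers reduce the feature map to [B, 512, 1, 1], not the actual architecture):

```python
import torch
import torch.nn as nn

# Stand-in for the SyncNet expert discriminator: the final feature map
# is [B, 512, 1, 1], so BatchNorm sees one value per channel when B == 1.
syncnet = nn.Sequential(
    nn.Conv2d(3, 512, kernel_size=4),  # hypothetical layer, chosen to yield a 1x1 map
    nn.BatchNorm2d(512),
    nn.ReLU(),
)

x = torch.randn(1, 3, 4, 4)  # batch size 1 -> output is [1, 512, 1, 1]

syncnet.train()
try:
    syncnet(x)  # BatchNorm needs >1 value per channel for batch statistics
except ValueError as e:
    print("train():", e)

syncnet.eval()  # BatchNorm now uses running stats, so batch size 1 is fine
out = syncnet(x)
print("eval() output shape:", tuple(out.shape))  # (1, 512, 1, 1)
```

Since the expert discriminator is frozen during wav2lip training anyway, putting it in eval mode is the natural fix.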
@cncbec @ulucsahin I am also using a custom dataset. My guess is that either this error was never encountered with the original dataset, or the model was being switched to eval mode inside the evaluation functions written for the LRS2 data, which I'm not using.
I experienced the same problem: whenever the sync loss drops to 0.75 or less, the error is raised.
!!! Because PyTorch uses BatchNorm during training, batch_size must be greater than 1, so `model.train()` triggers the `_verify_batch_size()` check. In the end, the fix was simply to prepare two or more samples.
@nicoleng
Hi, sorry to bother you, but since you are using a custom dataset, could you please show me the structure of the filelist for your custom dataset? When I try to train on my own dataset, the progress is always at 0%, and I think the issue is with the filelist (e.g., the train.txt file).
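Not the original poster, but for reference: in the stock Wav2Lip setup, each line of filelists/train.txt is a path, relative to the preprocessed data root, to one clip directory that contains the extracted frames (0.jpg, 1.jpg, ...) and audio.wav produced by preprocessing. A sketch (directory names are made up) that generates such a filelist by scanning for valid clip directories:

```python
import os

def write_filelist(preprocessed_root, out_path):
    """Write one clip-directory path per line, relative to the data root,
    keeping only directories that actually hold frames and audio."""
    lines = []
    for dirpath, _dirnames, filenames in os.walk(preprocessed_root):
        if "audio.wav" in filenames and any(f.endswith(".jpg") for f in filenames):
            lines.append(os.path.relpath(dirpath, preprocessed_root))
    with open(out_path, "w") as f:
        f.write("\n".join(sorted(lines)) + "\n")

# Example layout (made-up speaker/clip names):
#   preprocessed/speaker1/clip001/{0.jpg, 1.jpg, ..., audio.wav}
# -> train.txt then contains the line:  speaker1/clip001
```

If the progress bar stays at 0%, it is worth checking that the paths in train.txt actually resolve under the data_root you pass to the training script, and that each clip directory really contains both the .jpg frames and audio.wav.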