
Code not working

Open samyak0210 opened this issue 4 years ago • 12 comments

Hello,

I was using your code on a video, but demo_syncnet.py gives an error when I run it. It runs fine for example.avi but not for my video. Can you help me?

[screenshot of the error message]

samyak0210 avatar Feb 27 '21 11:02 samyak0210

Make sure the lengths of your video and audio are the same.
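A quick way to check this is to compare the stream durations reported by ffprobe. A minimal sketch (the helper name and the tolerance of one frame at 25 fps are assumptions, not part of the repo):

```python
# Hypothetical helper: decide whether a clip's audio and video streams are
# close enough in length for demo_syncnet.py. The durations themselves can
# be read with ffprobe, e.g.:
#   ffprobe -v error -select_streams v:0 \
#           -show_entries stream=duration -of csv=p=0 video.mp4
# (use -select_streams a:0 for the audio stream)
def duration_mismatch(video_seconds, audio_seconds, tolerance=0.04):
    """Return the gap in seconds if it exceeds `tolerance`
    (roughly one frame at 25 fps), otherwise 0.0."""
    gap = abs(video_seconds - audio_seconds)
    return gap if gap > tolerance else 0.0
```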

hrzisme avatar Feb 28 '21 01:02 hrzisme

Hey, thank you for pointing that out. I downloaded the video from YouTube, so I didn't expect this error. I tried to fix it with the DaVinci video editing tool, but since the difference in lengths is very small, it can't trim to that precision. Can you suggest a better tool? Or could I send you the video so you can give me the detected active speakers?

samyak0210 avatar Feb 28 '21 11:02 samyak0210

You can solve this by using ffmpeg.
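For example, ffmpeg's -shortest flag stops writing the output when the shorter stream ends, which equalises the audio and video lengths. A sketch of the invocation (the wrapper function name and the aac codec choice are assumptions; only -shortest is essential):

```python
import subprocess


def build_trim_command(src, dst):
    # -shortest stops writing as soon as the shorter stream ends, so the
    # output's audio and video durations match; video is stream-copied.
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "copy", "-c:a", "aac", "-shortest", dst]


# Run it once per clip, e.g.:
#   subprocess.run(build_trim_command("in.mp4", "out.mp4"), check=True)
```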

hrzisme avatar Mar 13 '21 02:03 hrzisme

I solved the problem by resizing the frames of the video to 224x224.

hanbaobao950123 avatar Jun 10 '21 09:06 hanbaobao950123

Hello, I have the same issue. Would you tell me how you fixed it? @samyak0210

EhsanRusta avatar Jul 25 '22 08:07 EhsanRusta

@EhsanRusta maybe you should resize your video frames to 224x224, just like example.avi.

Momotyust avatar Jul 27 '22 07:07 Momotyust

Actually demo_syncnet.py has a pretty restricted usage. It will work only for videos that are similar in format (i.e. size, number of faces) to example.avi. In order to obtain results for an arbitrary video, you need to pass it through the whole pipeline, as noted later in the README:

Full pipeline:

sh download_model.sh
python run_pipeline.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output
python run_syncnet.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output
python run_visualise.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output

Here run_pipeline.py preprocesses the video in the proper way (splitting it into scenes, detecting faces, cropping, etc.) so that run_syncnet.py can do its job.

hannarud avatar Aug 30 '22 17:08 hannarud

Actually demo_syncnet.py has a pretty restricted usage. [...]

and how to filter dataset for wav2lip?

wllps1988315 avatar Jan 19 '23 09:01 wllps1988315

Actually demo_syncnet.py has a pretty restricted usage. [...]

Can you share some methods for preprocessing wav2lip datasets with this project? Thank you.

ThetaRgo avatar Apr 19 '23 10:04 ThetaRgo

just change

for fname in flist:
    images.append(cv2.imread(fname))

to

for fname in flist:
    # resize each frame to the 224x224 input the model expects
    images.append(cv2.resize(cv2.imread(fname), (224, 224)))

in SyncNetInstance.py

The model was not meant to work with other input shapes.

MisterCapi avatar Apr 29 '24 11:04 MisterCapi

Actually demo_syncnet.py has a pretty restricted usage. [...]

and how to filter dataset for wav2lip?

Have you figured out how to filter the dataset?

guo-king666 avatar May 07 '24 02:05 guo-king666

Actually demo_syncnet.py has a pretty restricted usage. [...]

and how to filter dataset for wav2lip?

Hey, were you able to filter dataset?

kashishnaqvi10 avatar May 08 '24 21:05 kashishnaqvi10