wav2lip_288x288
Loss stuck around 0.69
I ran into a strange issue while training the color SyncNet. I have read your other issue about filtering the dataset to [-1, 1], but I really don't understand it. How do we do that?
- I have cut the videos into clips shorter than 5 seconds.
- The video resolution ranges from 760px to 1080px.
- The learning rate is 1e-4; I also tried 1e-5.
- The batch size is 16.
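(For reference, 0.69 is ln 2, the binary cross-entropy a classifier gets by predicting 0.5 for every in-sync/out-of-sync pair, so a loss pinned there means SyncNet is only guessing. A two-line check in plain Python:)

```python
import math

# BCE for a constant 0.5 prediction: -log(0.5) = ln 2,
# exactly the plateau reported above.
print(-math.log(0.5))  # 0.6931471805599453
```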
Hi!
[-1, 1] refers to the audio-video offset values that you get after running the code from https://github.com/joonson/syncnet_python on your videos. I recommend running run_pipeline.py first, as it preprocesses the video to make it suitable for SyncNet, and then running run_syncnet.py from the same repo.
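In rough Python, a batch over a folder of clips would look like this (a sketch, not from the repo: the folder layout is my assumption, and the flags are the ones in syncnet_python's README, so double-check them there):

```python
import subprocess
from pathlib import Path

CLIPS = Path("clips")           # assumed: a flat folder of <=5 s .mp4 clips
DATA_DIR = Path("syncnet_out")  # placeholder output directory

for video in sorted(CLIPS.glob("*.mp4")):
    ref = video.stem  # syncnet_python keys its intermediate files by this name
    # Step 1: detect, track, and crop the face so the clip suits SyncNet.
    subprocess.run(["python", "run_pipeline.py",
                    "--videofile", str(video), "--reference", ref,
                    "--data_dir", str(DATA_DIR)], check=True)
    # Step 2: compute the AV offset and confidence for the tracked face.
    subprocess.run(["python", "run_syncnet.py",
                    "--videofile", str(video), "--reference", ref,
                    "--data_dir", str(DATA_DIR)], check=True)
```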
Thanks for the help. Is running those 2 commands enough?
Yes. There is one more, run_visualize.py, but it's just for visualization purposes.
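As for the actual [-1, 1] filtering: run_syncnet.py reports an AV offset per clip, and you keep only the clips whose offset falls in that range. A sketch, assuming you have collected the reported offsets into a hypothetical offsets.csv with video,offset columns:

```python
import csv

KEEP_RANGE = (-1, 1)  # clips outside this range are audio/video misaligned

with open("offsets.csv") as f, open("filtered_filelist.txt", "w") as out:
    for row in csv.DictReader(f):  # assumed columns: video, offset
        if KEEP_RANGE[0] <= int(row["offset"]) <= KEEP_RANGE[1]:
            out.write(row["video"] + "\n")  # keep only well-synced clips
```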
I have encountered the same problem: the loss is stuck at 0.69 and not falling. May I ask if you have solved it?
I am working on it, but I have not been able to solve it yet. In my case, syncnet_python is somehow not working on my AVSpeech dataset. Please let me know if you solve it.
I have encountered the same problem; my loss is also stuck at 0.69. Oh my! May I ask if you have solved it?
Hi! Have you solved it?