
How much target data?

Open mallorbc opened this issue 5 years ago • 4 comments

What is the recommended video length for the target video?

mallorbc avatar Jun 28 '19 17:06 mallorbc

I went ahead and used 3 minutes of training video. Results are decent. I'm not sure whether more data would make a significant difference, though.
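For a rough sense of dataset size, footage length can be converted into a frame count. A minimal sketch (the 30 fps rate is an assumption here; the actual extraction rate depends on the source video and the repo's preprocessing script):

```python
# Estimate how many training frames a target video of a given length yields.
# ASSUMPTION: frames are extracted at 30 fps; adjust to your source footage.
FPS = 30

def n_frames(minutes: float, fps: int = FPS) -> int:
    """Number of frames extracted from a clip of the given length."""
    return int(minutes * 60 * fps)

# 3 min (this thread) vs. the paper's 8-17 min of target footage (Sec. 8.3)
for mins in (3, 8, 17):
    print(f"{mins:>2} min -> {n_frames(mins):,} frames")
```

So 3 minutes gives roughly a third of the frames the paper's shortest target video provides, which may explain "decent but not great" results.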

mallorbc avatar Jul 06 '19 18:07 mallorbc

I need guidance from people who have been able to run this code. Kindly reply below.

AbdulBariKhan avatar Aug 02 '19 21:08 AbdulBariKhan

@mallorbc the original paper mentions in Section 8.3 that they used 8~17 mins of footage for their target videos. Additionally, they point out:

We follow the progressive learning schedule from pix2pixHD and learn to synthesize at 512 × 256 at the first (global) stage, and then upsample to 1024 × 512 at the second (local) stage... We trained the global stage for 5 epochs, the local stage for 30 epochs

I have no idea how or whether those resolution stages are automatically implemented in this repo, or how to set them manually. Anyone care to share some insight? @seaweiqing , @yanx27 ?

I am currently training the GAN on a 13 min target video. Using an n1-standard-4 instance in GC (4 vCPUs, 15 GB memory) with an Nvidia K80 GPU, it took me 9 hours to get through the first epoch. I see some slight convergence in the loss, but it is far from over. I guess I'll post an update in 50 hours or so...
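The progressive schedule quoted above amounts to plain control flow: train a coarse generator at low resolution first, then fine-tune a higher-resolution one on top of it. A minimal sketch; the stage names, resolutions, and epoch counts come from the quoted paper text, but `train_epoch` and the schedule structure are hypothetical placeholders, not this repo's actual API:

```python
# Hedged sketch of pix2pixHD's two-stage progressive schedule.
# ASSUMPTION: `train_epoch(stage, resolution, epoch)` is a stand-in for
# whatever training entry point the repo exposes; it is not a real API here.
STAGES = [
    {"name": "global", "resolution": (512, 256),  "epochs": 5},   # coarse stage
    {"name": "local",  "resolution": (1024, 512), "epochs": 30},  # fine stage
]

def run_schedule(train_epoch):
    """Run each stage in order; the local stage fine-tunes the
    weights produced by the global stage."""
    for stage in STAGES:
        for epoch in range(stage["epochs"]):
            train_epoch(stage["name"], stage["resolution"], epoch)
```

If the repo follows pix2pixHD's conventions, the two stages would correspond to training the global and local generators as separate runs rather than one loop, but the epoch budget is the same either way: 35 epochs total, most of them at full resolution.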

slothkong avatar Mar 26 '20 12:03 slothkong


Can you please provide a sample target video, or the target video used in this project? It would be of great help.

shashu421 avatar Oct 20 '20 07:10 shashu421