articulated-animation

Can we train the model on a single video and can we add our own driving video

Open vinay345 opened this issue 4 years ago • 12 comments

vinay345 avatar May 01 '21 15:05 vinay345

Why do you need to train on a single video?

AliaksandrSiarohin avatar May 01 '21 15:05 AliaksandrSiarohin

Yes, but the results are unlikely to be satisfactory. I'm guessing that you're looking to do some kind of quick, supervised motion transfer and get around training the model, since training takes a long time (I don't blame you, it's resource heavy). It's best to create a dataset containing the specific thing you want, train on that, and then run inference. In a scenario like this, you could even create a synthetic dataset (for example, 3D animations of what you want, where you control the scene) and train on that.

@AliaksandrSiarohin I think the reason is that it can be hard to build in-the-wild datasets for the thing you want. For example, if you wanted to train a model on people doing backflips, it would be hard to find sufficient data because of the varying camera angles. Datasets like Taichi are easier because the cameras are stationary.

To answer the second question, yes.

ExponentialML avatar May 01 '21 23:05 ExponentialML

When I add my own driving video and source image, it doesn't work. What could be the reason? Second, can the driving video be anything other than the dataset videos that you have used?

vinay345 avatar May 02 '21 11:05 vinay345

I can't say; show an example.

AliaksandrSiarohin avatar May 02 '21 11:05 AliaksandrSiarohin

When I use my own driving video and source image, the face in the generated video is different from the one in the source image, and the quality of the generated video is not good. What is causing this? Thanks.

Adorablepet avatar May 06 '21 03:05 Adorablepet

Send an example.

AliaksandrSiarohin avatar May 06 '21 11:05 AliaksandrSiarohin

@AliaksandrSiarohin test_demo.zip. I run the command as follows:

python demo.py --config config/ted384.yaml --driving_video driving_video.mp4 --source_image source_image.png --checkpoint checkpoints/ted384.pth

Thanks.

Adorablepet avatar May 07 '21 09:05 Adorablepet

Your video is not cropped. You should crop it so that it is square around the person. You can also try the ted-youtube config and checkpoint.

AliaksandrSiarohin avatar May 07 '21 11:05 AliaksandrSiarohin

Your video is not cropped. You should crop it so that it is square around the person. You can also try the ted-youtube config and checkpoint.

I ran this command with ffmpeg:

ffmpeg -i driving_video.mp4 -vf scale=384:384,setdar=1:1 driving_video_crop.mp4

But result.mp4 is still not good. test_demo_crop.zip

Is there a problem with my cropping method? Could you give reference code? Thanks again.

Adorablepet avatar May 08 '21 01:05 Adorablepet

Thanks for your great work! I'm going through the same thing. Is there something wrong with my data? Desktop.zip

zhaoloulou avatar Jun 03 '21 07:06 zhaoloulou

Hello, I would like to ask whether the model you provided is the best one, and whether the training time and data were sufficient.

zhaoloulou avatar Jun 03 '21 07:06 zhaoloulou

Image and video crops should include the upper part of the legs; see the examples in the Readme.

AliaksandrSiarohin avatar Jun 03 '21 09:06 AliaksandrSiarohin
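For reference, a square crop of the kind described above can be produced with ffmpeg's crop filter before scaling. The crop size and offsets below are only placeholders (they are not from this thread); choose them so the square covers the person from the head down to the upper legs, then scale to the model's input resolution:

ffmpeg -i driving_video.mp4 -vf "crop=720:720:300:0,scale=384:384" driving_video_crop.mp4

Using scale=384:384,setdar=1:1 alone, as in the earlier command, only resizes and distorts the full frame rather than cropping a square region around the person. After cropping, the same demo.py command applies, with the ted-youtube config and checkpoint substituted for --config and --checkpoint if desired.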