Thin-Plate-Spline-Motion-Model
How to properly use the ted dataset
Thanks for the repo and colab. I've gotten the demo to work with vox and a portrait. I'm trying to get ted to work.
What do you think are the optimal parameters for the driving footage (for ted, vox, etc.)?
Edit 1: I used the ted checkpoint and config and made a little bit of progress, with limbs now showing, but the output is still pretty messy. Perhaps I just need to match the dimensions of the source image to the footage exactly in frame 1? Cropping the source image to match the driving video and removing the background from the source image helped a tiny bit, but the result is still bad.
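To make "match dimensions" concrete: one way is to center-crop the source image to a square and resize it to the model's frame shape before feeding it in. A minimal sketch using numpy with nearest-neighbor resizing (the helper name, the 384×384 frame size for ted, and this whole preprocessing step are my assumptions, not something from the repo):

```python
import numpy as np

def center_crop_resize(img, size=384):
    """Center-crop an H x W x C image to a square, then do a
    nearest-neighbor resize to (size, size, C)."""
    h, w = img.shape[:2]
    s = min(h, w)                      # largest centered square
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s]
    # Nearest-neighbor sampling indices mapping size -> s.
    idx = (np.arange(size) * s / size).astype(int)
    return crop[idx][:, idx]
```

In practice you would load the source image and the first driving frame (e.g. with imageio), run both through something like this, and confirm they end up the same shape before calling the demo.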
Edit 2: I see your comparison and your tip to use taichi for full body: https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model/issues/38 I'll try taichi.
Would you say that for ted, the driving video should be cropped at chest level and above, like in your examples? Could you include the single source image in the assets folder for the demo gifs you made (instead of the row of gifs)? Does the background need to be close to a solid color? I grabbed ted footage with more objects in the background, though the person stayed stationary in the center, and my source images also had a lot of background noise. In general the output just has the center pushed out where the ted talker is; it wobbles around a bit, but there's no limb or facial motion being picked up.
When using ted, I left the vox config unchanged because I'm unclear on how to modify it. Would you say I need to use a ted-specific config and go through the parameters? I'll start looking now just in case.
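For what it's worth, swapping datasets is presumably just a matter of pointing the demo at the matching config/checkpoint pair rather than editing the vox config. A hedged sketch of the invocation (the flag names follow the repo's vox example; the exact ted file names are an assumption):

```shell
# Same demo.py call as for vox, with the ted config and
# checkpoint swapped in (file names assumed, not verified).
CUDA_VISIBLE_DEVICES=0 python demo.py \
  --config config/ted-384.yaml \
  --checkpoint checkpoints/ted.pth.tar \
  --source_image ./source.png \
  --driving_video ./driving.mp4
```

If the repo ships a ted config, using it as-is should be safer than hand-editing vox parameters, since the two models were likely trained at different resolutions.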
Yeah. It would be great to know how it will work.
@GuruVirus Have you had any progress with this? So far my results are pretty bad even with aligned/cropped data.
This issue is still very relevant.