4D-Facial-Avatars
How to get expression statistics?
Thanks for your previous support on making a continuous video.
In the real_to_nerf.py code, it says I need "expressions.txt" and "rigid.txt". Also, in the JSON file of the person_1 dataset, there are "expressions" values that are already prepared.
How can I get these values from my own video or image sequence? I searched for the face2face model code, and there's nothing but demo code using a pix2pix model.
You need a face tracker; you can try some open-source ones, e.g. [video-head-tracker](https://github.com/philgras/video-head-tracker).
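For reference, here is a minimal sketch of what dumping the tracker results into those two files could look like. It assumes (not verified against real_to_nerf.py) that expressions.txt stores one row of expression coefficients per frame and rigid.txt one flattened 4x4 head pose per frame; `expr` and `poses` below are placeholders for whatever your tracker actually outputs:

```python
import numpy as np

# Placeholder tracker outputs -- replace with your face tracker's results.
# expr:  (num_frames, expr_dim) expression coefficients per frame
# poses: (num_frames, 4, 4)    rigid head pose (R|T as a homogeneous matrix) per frame
num_frames, expr_dim = 1000, 76
expr = np.zeros((num_frames, expr_dim), dtype=np.float32)
poses = np.tile(np.eye(4, dtype=np.float32), (num_frames, 1, 1))

# Assumed file format: one whitespace-separated row per frame.
np.savetxt("expressions.txt", expr)
np.savetxt("rigid.txt", poses.reshape(num_frames, -1))
```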
@gafniguy As for this vht repo (https://github.com/philgras/video-head-tracker), it outputs an expression vector of a totally different dimension (100D). How do I align that with the NerFACE requirement of a 76D expression vector? Can I just change video-head-tracker's output dimension to match your input requirement?
@soom1017 hey! I am also stuck at the step of generating these "expressions.txt" and "rigid.txt" files. Have you figured it out in the end?
Sorry, since my team decided not to use NeRF-like models, I haven't made any further progress.
@yangqing-yq Yes, you can just change the 76 to the dimension of the FLAME expression vector. With the rigid pose you have to be a bit more careful, as FLAME has a neck parameter as well; make sure you take that into account when you save the R|T of the head.
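To illustrate the neck caveat, here is a rough sketch of the kind of composition meant, assuming the tracker gives FLAME's global and neck rotations as axis-angle vectors and that the neck rotates about the neck joint of the fitted model. The `neck_joint` argument is a placeholder you would have to take from your own FLAME fit; this is not code from the NerFACE repo:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_rigid_pose(global_rot_aa, global_trans, neck_rot_aa, neck_joint):
    """Compose a 4x4 head pose from FLAME global and neck parameters.

    global_rot_aa: (3,) axis-angle global (root) rotation
    global_trans:  (3,) global translation
    neck_rot_aa:   (3,) axis-angle neck rotation
    neck_joint:    (3,) neck joint location (placeholder; take it from your fitted model)
    """
    def to_homogeneous(rot_mat, trans):
        T = np.eye(4)
        T[:3, :3] = rot_mat
        T[:3, 3] = trans
        return T

    # Global transform of the root.
    T_global = to_homogeneous(R.from_rotvec(global_rot_aa).as_matrix(), global_trans)

    # Neck rotation applied about the neck joint (rotation about a point).
    R_neck = R.from_rotvec(neck_rot_aa).as_matrix()
    T_neck = to_homogeneous(R_neck, neck_joint - R_neck @ neck_joint)

    # Head pose = global transform after the neck articulation.
    return T_global @ T_neck
```

Whether the neck joint should be taken in the rest pose or the shaped/posed model depends on how your tracker parameterizes FLAME, so double-check this against the tracker's own skinning code before saving rigid.txt.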