
How to get expression statistics?

Open · soom1017 opened this issue 2 years ago · 5 comments

Thanks for your previous support on making a continuous video.

The real_to_nerf.py code says I need "expressions.txt" and "rigid.txt". Also, the JSON file of the person_1 dataset contains "expressions" values that are already prepared.

How can I get these values from my own video or image sequence? I searched for Face2Face model code, but found nothing except demo code using a pix2pix model.
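For context, this is roughly how I read the prepared values out of the dataset JSON. Just a sketch; the key names ("frames", "expression", "transform_matrix") are my reading of the person_1 transforms files, so check them against your copy:

```python
import json

# Peek at the per-frame FLAME parameters shipped with the person_1 dataset.
# Key names are assumptions based on the transforms JSON layout -- verify
# against the actual file.
with open("person_1/transforms_train.json") as f:
    meta = json.load(f)

frame = meta["frames"][0]
print(len(frame["expression"]))   # expected: a 76-d expression vector
print(frame["transform_matrix"])  # 4x4 rigid head pose for this frame
```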

soom1017 avatar Apr 26 '22 09:04 soom1017

You need a face tracker; you can try some open-source ones, e.g. https://github.com/philgras/video-head-tracker
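If it helps, here is a minimal sketch of turning per-frame tracker output into the two text files that real_to_nerf.py asks for. The array names (`expr`, `head_pose`) and the .npz filename are placeholders, and the flattened layout for rigid.txt is an assumption, so check both ends before relying on this:

```python
import numpy as np

# Hypothetical tracker dump: per-frame FLAME expression coefficients and
# 4x4 rigid head poses. Adapt the loading to whatever your tracker saves.
tracked = np.load("tracked_flame_params.npz")
expr = tracked["expr"]          # (num_frames, expr_dim)
poses = tracked["head_pose"]    # (num_frames, 4, 4)

# One whitespace-separated expression vector per line.
np.savetxt("expressions.txt", expr)

# One flattened 4x4 pose per line (assumed layout -- check how
# real_to_nerf.py actually parses rigid.txt).
np.savetxt("rigid.txt", poses.reshape(len(poses), -1))
```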

gafniguy avatar May 16 '22 11:05 gafniguy

> You need a face tracker, you can try some open source ones, e.g. https://github.com/philgras/video-head-tracker

@gafniguy this vht repo outputs an expression vector of a totally different dimension (100-d). How can I align this with the nerface requirement of a 76-d expression vector? Can I just change video-head-tracker's output dimension to match your input requirement?

yangqing-yq avatar Dec 28 '22 14:12 yangqing-yq

@soom1017 hey, bro! I am also stuck at the step of generating "expressions.txt" and "rigid.txt". Have you figured it out in the end?

yangqing-yq avatar Dec 30 '22 10:12 yangqing-yq

> @soom1017 hey, bro! I am also stuck at the step of generating "expressions.txt" and "rigid.txt". Have you figured it out in the end?

Sorry about that. Since my team decided not to use NeRF-like models, I haven't made any progress on this.

soom1017 avatar Dec 30 '22 14:12 soom1017

@yangqing-yq yes, you can just change the 76 to the dimension of the FLAME expression vector. With the rigid pose you have to be a bit more careful: FLAME has a neck parameter as well, so make sure you take that into account when you save the R|T of the head.
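For anyone landing here later, a minimal sketch of what that composition could look like, assuming axis-angle FLAME pose parameters. Note this chains the two rotations only and ignores the joint-location offsets in FLAME's kinematic chain, which a faithful export would also need to apply:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def head_rt(global_rotvec, neck_rotvec, translation):
    """Fold FLAME's neck rotation into a single head R|T (sketch only)."""
    # The neck rotates in the frame of the global (root) rotation, so the
    # world-space head rotation is R_global @ R_neck.
    R = (Rotation.from_rotvec(global_rotvec)
         * Rotation.from_rotvec(neck_rotvec)).as_matrix()
    rt = np.eye(4)
    rt[:3, :3] = R
    rt[:3, 3] = translation
    return rt
```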

gafniguy avatar Dec 30 '22 15:12 gafniguy