FaceFormer
How can FaceFormer output ARKit blendshape coefficients instead of vertices?
The work is remarkable. How can FaceFormer output ARKit blendshape coefficients instead of vertices? Many thanks.
If FaceFormer supported ARKit blendshape coefficient output, it could also drive other 3D templates.
@youngstu Did you find a way to do this? I ran into some problems.
Hi, for the current model, the output representation is vertex positions. The training datasets we used do not contain blendshape-coefficient ground truth, so the model could not learn that information during training. I suspect some inverse-graphics technique would be required to convert the vertex positions into the 51 blendshape coefficients.
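A minimal sketch of that conversion, assuming you have an ARKit-style blendshape basis for the same mesh topology (all meshes and sizes below are synthetic placeholders, not FaceFormer's actual assets): since a frame is approximately the neutral mesh plus a coefficient-weighted sum of blendshape offsets, the coefficients can be fitted per frame by linear least squares.

```python
import numpy as np

# Assumed shapes: a FLAME-topology mesh has 5023 vertices, and the thread
# above talks about 51 ARKit-style blendshapes. The basis here is random
# synthetic data standing in for a real blendshape rig.
V, K = 5023, 51
rng = np.random.default_rng(0)
neutral = rng.normal(size=(V, 3))            # neutral (template) mesh
deltas = rng.normal(size=(K, V, 3)) * 0.01   # per-blendshape vertex offsets

# Synthesize a "predicted" frame from known coefficients so we can check the fit.
true_w = rng.uniform(0.0, 1.0, size=K)
frame = neutral + np.tensordot(true_w, deltas, axes=1)

# Fit: frame - neutral ≈ sum_k w_k * delta_k, solved as one least-squares system.
A = deltas.reshape(K, -1).T                  # (V*3, K) design matrix
b = (frame - neutral).ravel()
w, *_ = np.linalg.lstsq(A, b, rcond=None)
w = np.clip(w, 0.0, 1.0)                     # ARKit coefficients live in [0, 1]
```

For production use, a properly constrained solver (e.g. non-negative least squares) and a temporal smoothness term across frames would behave better than clipping an unconstrained solution.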
I want to know: if I do have blendshape-coefficient ground truth, why is the lip animation in the rendered video not evident?
What problem did you encounter, and did you solve it?
@zhaiyuan0217 Not yet. Were you able to get the 51 blendshapes?
We have blendshapes for our data. After training the model on our data with blendshape targets, the rendered video is poor, but training the model with vertex targets produces a good rendered video.
@zhaiyuan0217 Hi, what do you mean by training the model with vertices?
I mean the model's output is the 5023×3 vertex positions.
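For reference, the relation between the two output representations is just the linear blendshape model: a 5023×3 vertex frame is the neutral mesh plus a coefficient-weighted sum of per-blendshape offsets. A toy sketch (function name and sizes are illustrative, not FaceFormer's code):

```python
import numpy as np

def blendshapes_to_vertices(neutral, deltas, coeffs):
    """Linear blendshape model: vertices = neutral + sum_k coeffs[k] * deltas[k].

    neutral: (V, 3) rest-pose mesh; deltas: (K, V, 3) offsets; coeffs: (K,).
    """
    return neutral + np.tensordot(coeffs, deltas, axes=1)

# Tiny example (a real mesh would be V=5023, K=51 as discussed above).
V = 4
neutral = np.zeros((V, 3))
deltas = np.stack([np.full((V, 3), 1.0), np.full((V, 3), 2.0)])  # K=2
verts = blendshapes_to_vertices(neutral, deltas, np.array([0.5, 0.25]))
```

So a model trained to emit coefficients can reuse the same vertex-based renderer by passing its output through this mapping first.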
I have the same question.
May I ask how you created your own dataset? Could you be more specific? Thank you! @zhaiyuan0217 @xiaodongyichuan
Did you solve this problem? I also trained with wav and blendshape data, and in the final driven result the mouth basically does not open.
I was trying to train on DECA expression parameters, but the mouth does not move when rendering the result.