TimoBolkart
> can you please elaborate how to do it sir Please run `print([placeholder for op in tf.get_default_graph().get_operations() if op.type=='Placeholder' for placeholder in op.values()])` [after this line](https://github.com/TimoBolkart/voca/blob/7bfd21ccdefcf284ba784f55eea57f9d5ab11d00/utils/audio_handler.py#L101)
Running your code, it seems that the namespace "deepspeech" is missing. This might be due to some changes in your code; I am not sure. Does removing the deepspeech namespace work for you,...
This seems to be a problem with file access. The video export first writes a temporary video file for the rendering and then combines it with the audio and outputs...
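A minimal sketch of that two-step export, assuming a rendered temporary video and a separate audio track are muxed with ffmpeg. The paths, flags, and function name here are illustrative, not the repo's exact call:

```python
# Hypothetical sketch: mux a rendered temporary video with the audio
# track using ffmpeg. If the output directory is not writable, this
# step fails with a file-access error.
import subprocess

def mux_audio(tmp_video, audio_path, out_path, run=subprocess.check_call):
    # copy the video stream as-is, encode the audio to AAC
    cmd = ['ffmpeg', '-y', '-i', tmp_video, '-i', audio_path,
           '-c:v', 'copy', '-c:a', 'aac', out_path]
    run(cmd)
    return cmd
```

Checking that the output path is writable before this step usually resolves the reported error.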
True, the audio is the input and the mesh data is the output/target for the training. It is not just about copying the data; you need to bring them into the...
For the eye blink, we directly take the FLAME expression blendshapes (weighted by the eye blink parameters) and add them to the VOCA output mesh. This is possible as VOCA...
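This addition works as a simple vertex offset. A minimal sketch, assuming hypothetical array names and shapes (the actual FLAME basis shapes in the repo may differ):

```python
import numpy as np

# Hypothetical sketch: add FLAME expression blendshapes, weighted by
# eye-blink parameters, on top of a VOCA output mesh. This is valid
# only because VOCA outputs meshes in FLAME topology.
def add_eye_blink(voca_verts, expr_basis, blink_params):
    """voca_verts:   (N, 3)    VOCA output mesh vertices
    expr_basis:      (N, 3, K) FLAME expression blendshape basis
    blink_params:    (K,)      expression weights encoding the blink
    Returns the mesh with the blink offsets added per vertex."""
    return voca_verts + expr_basis @ blink_params

# toy example with illustrative shapes
verts = np.zeros((5, 3))
basis = np.ones((5, 3, 4))
params = np.array([0.5, 0.0, 0.0, 0.0])
print(add_eye_blink(verts, basis, params).shape)  # (5, 3)
```

The key point is that no retargeting is needed: the blendshape offsets and the VOCA vertices index the same topology, so a plain per-vertex sum suffices.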
Can you please give more details, why you cannot download the pre-trained model?
Please try again to register. If it still does not work, please send an email to [email protected]
Either you take the output mesh in FLAME topology and extract the 3D landmarks from this mesh using the provided barycentric landmark embedding (you could do that by calling the...
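The landmark extraction with a barycentric embedding can be sketched as follows. Names and array layouts here are assumptions for illustration; the repo's embedding file may store them differently:

```python
import numpy as np

# Hypothetical sketch: recover 3D landmarks from a mesh in FLAME
# topology using a barycentric landmark embedding. Each landmark is a
# barycentric combination of the three vertices of one mesh face.
def landmarks_from_embedding(verts, faces, lmk_face_idx, lmk_bary):
    """verts:        (N, 3) mesh vertices
    faces:           (F, 3) vertex indices per triangle
    lmk_face_idx:    (L,)   face index carrying each landmark
    lmk_bary:        (L, 3) barycentric weights per landmark
    Returns (L, 3) landmark positions."""
    tri = verts[faces[lmk_face_idx]]            # (L, 3, 3) triangle corners
    return np.einsum('lij,li->lj', tri, lmk_bary)
```

For example, with weights (1/3, 1/3, 1/3) the landmark lands at the triangle centroid.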
Can you please provide some more details about how you obtained this? It looks like the 3D mesh was not aligned with the image when transferring the texture to the mesh....
Did you try again to download the model, maybe with a different browser?