skysworder
You can use SSML-to-soundwave instead of text-to-soundwave, and set the viseme type to "FacialExpression" in your SSML string. Below is a valid SSML example to get blendshape data...
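The original example is truncated here; a minimal SSML sketch, assuming the standard Azure Speech synthesis namespaces and an arbitrary neural voice (en-US-JennyNeural), with the mstts:viseme element requesting FacialExpression (blendshape) output:

```xml
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts"
       xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <!-- Request blendshape (facial expression) frames instead of plain viseme IDs -->
    <mstts:viseme type="FacialExpression"/>
    Rainbow has seven colors.
  </voice>
</speak>
```

With this SSML, the blendshape frames should arrive through the synthesizer's viseme event callback (the Animation payload of each viseme event) rather than in the audio stream itself.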
> Following your guidance, I obtained the blendshape data, but how can I use this data to make the MetaHuman do lip sync? Hope to get some guidance. I've tried several...
Here's an example Blueprint AnimGraph (in the face_animBP of your MetaHuman). Notice that I use a viseme pose asset (face_visemes_lib_PoseAsset) with the Oculus OVR lip sync naming convention instead of using Azure viseme ID numbers. For Blend Poses, you need...