MuseTalk
MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting
Traceback (most recent call last): File "E:\MuseTalk\.glut\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi result = await app( # type: ignore[func-returns-value] File "E:\MuseTalk\.glut\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__ return await self.app(scope, receive, send) File...
1. https://huggingface.co/spaces/TMElyralab/MuseTalk 2. How long does it take to generate a 10-second video?
Question: one machine has two 3060 12G GPUs (GPU0 and GPU1). How can digital-human inference make use of both of them?
Hello, thank you very much for this amazing contribution. I was wondering if it's possible to use multi-GPU for inference. I have 8xA100, but when I use it for inference,...
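The multi-GPU questions above come up repeatedly; these issues suggest the inference script does not shard a single job across GPUs. One common workaround is to launch one process per GPU and split the clips between them. A minimal Python sketch, where `run_clips` is a hypothetical stand-in for the real per-clip inference call:

```python
import os
from multiprocessing import Process


def run_clips(gpu_id: int, clips: list[str]) -> None:
    # Pin this worker to one GPU *before* any CUDA library is imported.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    for clip in clips:
        # Hypothetical stand-in: replace with the actual MuseTalk
        # inference invocation for a single clip.
        print(f"GPU {gpu_id}: processing {clip}")


if __name__ == "__main__":
    clips = ["a.mp4", "b.mp4", "c.mp4", "d.mp4"]
    num_gpus = 2  # e.g. two 3060s, or 8 for the 8xA100 setup above
    # Round-robin split: worker g handles clips[g], clips[g + num_gpus], ...
    workers = [
        Process(target=run_clips, args=(g, clips[g::num_gpus]))
        for g in range(num_gpus)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

This gives near-linear throughput on batches of clips, but it does not speed up a single video; that would require model-level parallelism the questions above imply is not available.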
Multi-GPU training timeout
Training on two A800 GPUs hangs; after waiting a while it throws an error. What could be the cause?
When I create the avatar with audio 1 (the target avatar speaking) and then lip-sync with audio 2 (a different speaker), the result seems different. Is it limited to using the same audio as...
When the source video shows a person talking normally and I get "Manually adjust range -x~y", what bbox shift value should I use, and how do I find the right value? Any suggestions?
If the source video is 30 FPS or 60 FPS, is it better to first convert it to 25 FPS and then generate, or to pass --fps 30 (matching the video's actual frame rate) at run time?
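For the frame-rate question, one low-risk option is to resample the source to 25 FPS with ffmpeg before running inference, rather than relying on the --fps flag. A sketch that only builds the command line (whether 25 FPS is actually the expected rate is an assumption to verify against the repo's documentation):

```python
def ffmpeg_25fps_cmd(src: str, dst: str) -> list[str]:
    # -r 25 resamples the video stream to 25 FPS; -c:a copy leaves
    # the audio stream untouched so lip-sync timing is preserved.
    return ["ffmpeg", "-y", "-i", src, "-r", "25", "-c:a", "copy", dst]


# Inspect the command; pass it to subprocess.run(...) once the paths are real.
print(" ".join(ffmpeg_25fps_cmd("input_60fps.mp4", "input_25fps.mp4")))
```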