MuseTalk claims to be real-time, but has anyone actually managed to make it run in real time?
Hi,
I am trying to run the model (main batch) in real time so that I can have a conversation with an avatar (I pass in the audio).
Since the authors claim real-time performance but do not say how it can be achieved, the only approach I could think of was to split the audio into chunks. However, this does not work even on an A100 with 1-second chunks and a websocket server.
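For what it's worth, the chunking step itself is simple to sketch. Below is a minimal, hypothetical helper (`chunk_audio` is not part of MuseTalk; it assumes mono PCM at a fixed sample rate) showing how a waveform could be split into fixed-length chunks before each one is sent over the websocket:

```python
import numpy as np

def chunk_audio(samples: np.ndarray, sample_rate: int = 16000,
                chunk_seconds: float = 1.0) -> list:
    """Split a mono waveform into fixed-length chunks for streaming.

    Each chunk would be sent to the server and processed independently;
    the final chunk may be shorter than chunk_seconds.
    """
    step = int(sample_rate * chunk_seconds)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

# Example: 3.5 s of silence at 16 kHz -> four chunks, the last one 0.5 s.
audio = np.zeros(int(16000 * 3.5), dtype=np.float32)
chunks = chunk_audio(audio)
```

Note that processing chunks independently loses cross-chunk context, which may be one reason naive chunking underperforms.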
Has anyone verified whether the real-time claim is actually true, or whether it is an error by the authors?
Thank you.
I have the same question. How is an RTF of less than 1 possible? Is the solution to send chunks to two different GPUs so they work in parallel?
Yes, I am able to run this in real time at 20 fps. The only problem I have at the moment is that on silent frames the mouth does not close much; aside from that, it runs fine.
Hi @rizwanishaq,
How did you manage to make it real-time?
- Did you split the audio into chunks?
- Did you use a cluster of GPUs?
- Did you use a video or a single image as the driving source?
Could you share your insights?
Thanks in advance.
Hi @rizwanishaq, interesting to know that you were able to run it in real time. What GPU/CPU combination are you using to get this result?
Thanks