Audio-based lipsync?
First of all, thanks for this program (especially since it has a Linux version, unlike VSeeFace).
I thought I couldn't migrate to Linux without giving up on being a VTuber, but this program is helping me consider giving Linux another shot. Second of all, I want to suggest a feature I used to rely on back when I was using VSeeFace. Is it possible to implement audio-based lip syncing? My mouth (on the VTuber model) doesn't seem to move very much while I'm talking, and it would help to have my microphone do the work.
Thanks!
Audio lipsync is probably one of the most requested features, and personally I would like to add it as well, but unfortunately I have yet to find a module that works in a web app.
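For what it's worth, a very rough amplitude-only version can run entirely in the browser with just the standard Web Audio API, without any extra module. This is only a sketch of the general idea, not how the app actually works; `setMouthOpen` here is a hypothetical callback standing in for whatever drives the model's mouth blendshape:

```typescript
// Minimal amplitude-based lipsync sketch using the standard Web Audio API.
// It maps microphone loudness (RMS) to a 0..1 mouth-open value each frame.
// `setMouthOpen` is a placeholder for whatever updates the avatar's mouth.
async function startAmplitudeLipsync(setMouthOpen: (value: number) => void) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 1024;
  source.connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  let smoothed = 0;

  const update = () => {
    analyser.getFloatTimeDomainData(samples);
    // Root-mean-square of the waveform approximates perceived loudness.
    let sumSquares = 0;
    for (const s of samples) sumSquares += s * s;
    const rms = Math.sqrt(sumSquares / samples.length);
    // Scale and clamp to 0..1, then smooth to avoid a jittery mouth.
    const target = Math.min(1, rms * 8);
    smoothed += (target - smoothed) * 0.3;
    setMouthOpen(smoothed);
    requestAnimationFrame(update);
  };
  update();
}
```

This only opens the mouth based on volume rather than forming actual phoneme shapes, but it may be better than nothing until a proper viseme model that runs in a web app turns up.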
The most powerful model animation tool I have seen so far! Sadly, lipsync is a key feature that's missing. I am not an expert, but maybe this could help: https://github.com/SARIT42/lipsyncr. If not, maybe you can still add it soon. It would be great, but no pressure.