Zhizhou Zhong
@kanghua309 Hi, are you using realtime-inference? If you are after faster speed, you can try applying [`torch.compile`](https://docs.pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) to the model.
Hi @hunglsxx, you can try the following commands:

```bash
pip uninstall onnxruntime-gpu
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```

---
Thank you for your interest! You can find details about the audio-driven control in the supplementary materials of our paper, where we have included the relevant experimental results.
> I scanned the PDF paper and i cant find audio driven control of the face just like in sadtalker or hedra wherein we just input the image and audio...
Due to some limitations, we are sorry that we are unable to provide this model, but you can follow the description in Appendix C to train an audio-driven model.
> I've downloaded the latest version and when it's finished rendering via the "Animate" button Sometimes everything looks fine, but sometimes there are very obvious EDGES around the face that...
@MissAlang Thank you for your attention. We currently only support Nvidia GPUs and Apple Silicon GPUs.
Thank you very much for your efforts in making LivePortrait a better project! We will follow up on your thread next week. @aihacker111
@Tinaisok Thank you for providing a solution. We have added a tag to this type of issue for reference by other users.
@on1you @yingzhige118 You can try the following:
1. Set [this parameter](https://github.com/TMElyralab/MuseTalk/blob/main/scripts/inference.py#L271) to `raw`.
2. Change [this line of code](https://github.com/TMElyralab/MuseTalk/blob/main/musetalk/utils/face_parsing/__init__.py#L107) to: `parsing[np.isin(parsing, [1, 10, 11, 12, 13])] = 255`
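To illustrate what the suggested one-line change does, here is a small sketch on a hypothetical toy `parsing` array (the class-id list `[1, 10, 11, 12, 13]` is taken from the suggestion above; which face regions those ids map to depends on the face-parsing model's label scheme):

```python
import numpy as np

# Toy stand-in for a face-parsing map: each entry holds a class id per pixel.
parsing = np.array([[0, 1, 10],
                    [11, 12, 13]])

# np.isin builds a boolean mask of pixels whose class id is in the list;
# those pixels are then set to 255 so they are included in the blend mask.
parsing[np.isin(parsing, [1, 10, 11, 12, 13])] = 255

print(parsing)
# → [[  0 255 255]
#    [255 255 255]]
```

The background pixel (class `0`) is left untouched, while all listed face-region classes are folded into a single 255 mask value.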