Sihan Chen
@foocker great work! 2.4s also includes face enhancing (gfpgan) time, right?
> > @foocker great work! 2.4s also includes face enhancing (gfpgan) time, right?
>
> no, the result is ok, so i removed it.

Cool!
Hi @diabolo98 , this error lies in the facexlib dependencies, not in inference.py. In your `/usr/local/lib/python3.8/dist-packages/facexlib/alignment/__init__.py`, try editing the line

```python
model.load_state_dict(torch.load(model_path)['state_dict'], strict=True)
```

to

```python
model.load_state_dict(torch.load(model_path, map_location=device)['state_dict'],...
```
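For anyone hitting the same error: here is a minimal, self-contained sketch of why `map_location` helps on a CPU-only machine. The `nn.Linear` model and checkpoint path are placeholders standing in for the facexlib alignment network, not the real code.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for the facexlib alignment network.
model = nn.Linear(4, 2)

# Save a checkpoint in the same {'state_dict': ...} layout facexlib uses.
torch.save({'state_dict': model.state_dict()}, 'ckpt.pth')

# On a CPU-only machine, map_location remaps any CUDA-saved tensors in the
# checkpoint onto the CPU instead of raising a deserialization error.
device = torch.device('cpu')
state = torch.load('ckpt.pth', map_location=device)
model.load_state_dict(state['state_dict'], strict=True)
print('loaded OK')
```

Without `map_location`, a checkpoint saved on a GPU machine tries to deserialize its tensors back onto CUDA, which fails on Colab's CPU-only runtime.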
Hi @diabolo98 , I just noticed that you are running the experiment on Colab. I have never tested it on Colab with **CPU only**; Colab's CPUs are limited. I...
The int8 optimization itself does not depend on Intel Extension for PyTorch here, so it will not raise the above errors. But again, I have not tested it in the Colab environment....
Colab is truly limited on the CPU side, because it expects you to use a GPU or TPU for compute-intensive work. My optimization is tested on Xeon Sapphire Rapids (check...
No problem, your feedback is welcome :)
@SuperMaximus1984 , please check https://github.com/Spycsh/xtalker/issues/3
Thanks for your interest. Currently XTalker can drive an image to speak at roughly 10x the speed of SadTalker. I have not integrated it into any real-time streaming system. However,...
Thanks for the info :) I do not currently have access to a Mac, and the optimization in my repo is only intended for Intel Xeon CPUs. But good to...