xlc-github
Hi, thank you for your code. I have tested your optimized_txt2img.py, and the inference time is indeed about 24-26 sec per image. Can the inference time decrease to 14-16 sec if the SD model has...
Do we need to update the model file when running optimizedSD/optimize_txt2img.py?
Hi, how should I set the parameters of process_pix2pix so that it needs less than 12 GB of VRAM?
Help needed with a concurrency issue
I currently have a single 4090. When I open multiple browser pages at the same time (5 of them), each with a different sessionid, all requesting the same digital human, what I want is for the video and audio to stay smooth while the digital human speaks on all 5 pages simultaneously. Is that achievable, and how can I increase the concurrency? I've found that 3 open pages is barely acceptable, but beyond 3 the video stutters and the audio becomes abnormal. What can I optimize to raise the concurrency? My GPU resources should be sufficient, and the server's CPU resources are sufficient as well, yet the supported concurrency turns out to be very low.