You-Cun

12 comments by You-Cun

We actually do not use the a1111-style prompts when testing the style LoRAs, though we planned to use them at the beginning. As such, the parameters are optimized without...

We have established the codebase and are now adjusting the default training options to simplify usage. The training pipeline is expected to be released around 07.01.

Hello, it seems that the code at line 84 has some problems. Please check this.

It may come from a CPU out-of-memory (OOM) error. Please check whether the available CPU memory is above 20 GB.
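
A quick way to check this (a minimal sketch using psutil; the 20 GB figure is the threshold mentioned above):

```python
import psutil

# Report available system RAM and compare it against the 20 GB threshold.
avail_gb = psutil.virtual_memory().available / 1024 ** 3
print(f"Available CPU memory: {avail_gb:.1f} GB "
      f"({'OK' if avail_gb >= 20 else 'likely too low'})")
```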

We are now preparing the journal version of FaceChain-ImagineID, and will release the code after that.

The memory leak stems from memory fragmentation in diffusers when pipelines are switched between CPU and GPU. Because the FACT version contains multiple pipelines (inpainting and text-to-image), it relies on this switching to save GPU memory. The issue needs to be resolved with jemalloc.
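
Not FaceChain's actual code, but a minimal sketch of the switching pattern described above; the pipeline classes are from diffusers and the model IDs are placeholders. Running the process with jemalloc preloaded (e.g. `LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 python app.py`) replaces glibc malloc and mitigates the fragmentation.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

# Both pipelines stay alive; only the one currently needed sits on the GPU.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16)

def run_t2i(prompt: str):
    inpaint.to("cpu")   # park the unused pipeline on the CPU to save VRAM
    t2i.to("cuda")
    image = t2i(prompt).images[0]
    t2i.to("cpu")       # repeated to() round-trips fragment CPU memory over time
    return image
```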

Thanks for the suggestion. Regarding loading multiple SD models: this design avoids the reload time when the user later switches models, but it does indeed increase memory usage and initial startup time. If needed, we can add a version in which the user manually switches and loads a model (see the sketch below). Regarding reusing the models preloaded by the SD WebUI: because the current version of FaceChain modifies the inference procedure of the SD model, it cannot directly reuse the SD WebUI's models; this would require refactoring the underlying code of the WebUI's inference steps. If you are interested, consider joining the FaceChain developer group to work on this feature together.
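
A hypothetical sketch of that "manually switch and load" alternative, keeping at most one pipeline resident instead of preloading every model; the cache policy and identifiers are assumptions, not FaceChain's actual design.

```python
import torch
from diffusers import StableDiffusionPipeline

_current = {"name": None, "pipe": None}

def get_pipeline(model_name: str) -> StableDiffusionPipeline:
    # Reload only when the requested model differs from the resident one,
    # trading per-switch latency for lower steady-state memory use.
    if _current["name"] != model_name:
        _current["pipe"] = StableDiffusionPipeline.from_pretrained(
            model_name, torch_dtype=torch.float16
        ).to("cuda")
        _current["name"] = model_name
    return _current["pipe"]
```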

I ran into the same problem. Looking at the intermediate results, the render, albedo, and normal outputs from DECA are all zeros; the exact cause is unclear for now.
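
A small helper sketch for spotting such all-zero intermediates; in practice one would pass DECA's render, albedo, and normal tensors to it.

```python
import torch

def report_zero(name: str, t: torch.Tensor) -> None:
    # Print the tensor's magnitude and whether it is entirely zero.
    print(f"{name}: abs-max={t.abs().max().item():.4g}, "
          f"all_zero={bool((t == 0).all())}")

# Dummy example; replace with the actual DECA outputs (render/albedo/normal).
report_zero("albedo", torch.zeros(1, 3, 224, 224))
```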

Please run "pip install modelscope -U", then run "pip install datasets==2.16".

Currently we do not support training or inference on multiple GPUs through the Gradio interface. For script execution, it is suggested to modify run_inference.py to use multiple GPUs (a rough sketch is given below).
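
A rough sketch of one way to do this: launch one run_inference.py process per GPU and pin each to a device via CUDA_VISIBLE_DEVICES. The --user_id argument and job identifiers are hypothetical and stand in for whatever per-run arguments the script actually takes.

```python
import os
import subprocess

gpu_ids = [0, 1]                # devices to use
jobs = ["user_a", "user_b"]     # one job per GPU (placeholder identifiers)

procs = []
for gpu, uid in zip(gpu_ids, jobs):
    # Restrict each worker to a single GPU via its environment.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    procs.append(subprocess.Popen(
        ["python", "run_inference.py", "--user_id", uid], env=env))
for p in procs:
    p.wait()
```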