Using OpenVINO to accelerate on Intel CPUs
I am using the CPU for Stable Diffusion XL, and I find it very slow. I learned that converting the model to OpenVINO IR can accelerate inference, and the OpenVINO pipeline is supported by diffusers. Could you please add the ability to load OpenVINO IR .bin files as models, the same way .ckpt and .safetensors files are loaded?
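
For context, here is a minimal sketch of the diffusers path I mean, using optimum-intel (`pip install optimum[openvino]`); the model ID and output directory are just examples:

```python
# Sketch: run SDXL through OpenVINO on an Intel CPU via optimum-intel.
from optimum.intel import OVStableDiffusionXLPipeline

# export=True converts the PyTorch weights to OpenVINO IR (.xml/.bin) on the fly.
pipe = OVStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True
)
pipe.save_pretrained("sdxl-openvino-ir")  # save the IR files so the export only happens once

image = pipe(prompt="a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```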
Alternatively, adding support for ONNX models would also work.
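
The ONNX route looks nearly identical in diffusers, via optimum's ONNX Runtime integration (`pip install optimum[onnxruntime]`); again the model ID is illustrative:

```python
# Sketch: the same SDXL pipeline, but backed by ONNX Runtime instead of OpenVINO.
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True  # export PyTorch weights to ONNX
)
image = pipe(prompt="a photo of an astronaut riding a horse").images[0]
```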
Also, if there are any other ways to accelerate inference on an Intel CPU with low memory, please let me know. Thank you! 🙏
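
For what it's worth, plain diffusers already exposes a couple of documented memory reducers that trade some speed for a lower peak footprint; a quick sketch:

```python
# Sketch: lower-memory SDXL inference on CPU with stock diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float32
)
pipe.enable_attention_slicing()  # compute attention in chunks instead of all at once
pipe.enable_vae_slicing()        # decode latents slice by slice

image = pipe(prompt="a photo of an astronaut riding a horse",
             num_inference_steps=20).images[0]
```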
Up