Andrew Ryzhkov
OpenVINO doesn't support FP16 at all on CPU. So even if a model is stored in FP16, it will be computed in FP32 anyway. Frankly, I see no point in having...
Can you load the FP32 model and use FP16 for calculations? No problem.
> > Can you load the FP32 model and use FP16 for calculations? No problem.

In the 2023.0 version, yes. But having FP16 also reduces the model size significantly....
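For reference, here's a minimal sketch of querying and overriding the CPU inference precision, assuming the OpenVINO 2023 Python API (the model path is illustrative):

```python
import openvino as ov

core = ov.Core()
# Check the default execution precision on CPU
print(core.get_property("CPU", "INFERENCE_PRECISION_HINT"))

# Compile an FP32 IR but request FP16 execution where the hardware supports it
model = core.read_model("model.xml")
compiled = core.compile_model(model, "CPU", {"INFERENCE_PRECISION_HINT": "f16"})
```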
Just set default="output.jpg".
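A minimal sketch, assuming the script parses the output path with argparse (the argument name is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# Fall back to output.jpg when no path is given on the command line
parser.add_argument("--output", default="output.jpg")
args = parser.parse_args()
```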
@baselqt Convert it to OpenVINO IR format following the tutorial here: https://docs.openvino.ai/2023.1/notebooks/225-stable-diffusion-text-to-image-with-output.html and modify it with the following code:

```python
from diffusers import StableDiffusionPipeline

pipe1 = StableDiffusionPipeline.from_single_file("path/model.safetensors")  # local checkpoint
pipe = StableDiffusionPipeline.from_pretrained("prompthero/openjourney", unet=pipe1.unet).to("cpu")
```
@homevk15 Sure, here's my code: https://github.com/RedAndr/SD_PyTorch2ONNX/blob/main/Convert_Civitai_OpenVINO.py Let me know if it doesn't work for you.
BTW, the files in the https://huggingface.co/bes-dev/stable-diffusion-v1-4-openvino/tree/main directory were updated two months ago, so that shouldn't be the cause.
There is a tutorial on how to convert the model to ONNX format and then to the IRs: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/225-stable-diffusion-text-to-image/225-stable-diffusion-text-to-image.ipynb It works fine, except it lacks the VAE encoder, but...
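For the last step, a minimal sketch of the ONNX-to-IR conversion, assuming OpenVINO's model conversion API (file names are placeholders):

```python
from openvino.tools import mo
from openvino.runtime import serialize

# Convert an exported ONNX model to OpenVINO IR
ov_model = mo.convert_model("unet.onnx")
serialize(ov_model, "unet.xml")  # writes unet.xml and unet.bin
```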
Actually, I was wrong about half-precision. Those models can be converted too; you just need to add torch_dtype=torch.float32 to the pipe options.
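A minimal sketch of what that looks like, assuming the diffusers StableDiffusionPipeline (the model name is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in full precision so an FP16 checkpoint converts cleanly
pipe = StableDiffusionPipeline.from_pretrained("prompthero/openjourney", torch_dtype=torch.float32)
```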
Frankly, I've modified my version too much to recall what I changed at the beginning. However, it is quite simple: just run the code and you will see where the...