leejet
> it is rather slow, q8 is the fastest i guess

Currently, it only supports running on the CPU. The CPU performance on Colab is not very strong, which results in slow image generation.
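For reference, a minimal CPU invocation might look like the sketch below. The model filename and thread count are placeholders; `-m`, `-p`, and `-t` are the usual `sd` CLI flags, but they may differ between versions.

```sh
# Hypothetical example: running a q8_0-quantized SD 1.x model on CPU
# (the model path and thread count are assumptions, not from the thread)
./sd -m ../models/sd-v1-4-ggml-model-q8_0.bin -p "a lovely cat" -t 4
```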
> @czkoko you can use that model?? I've been trying some civitai model and converting it, but it didn't work like at #8

@juniofaathir Most of the SD 1.x models...
> Are you choosing a specific "recipe"?

This is determined by the characteristics of the ggml library: quantization can only be applied to the weights of the fully connected layers, and...
> The project works well on Android so maybe @leejet wants to update the supported platform list.

Glad to hear that. I'll update the documentation later.
By the way, I've made a small optimization to make inference faster. I've tested it and it provides a `~10%` speed improvement. Feel free to pull the latest code and try it out.
> @leejet do I need to `make` again?

Yes, you need to run `make` again.
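Concretely, picking up the optimization is just a pull and a rebuild. A minimal sketch, assuming an in-tree build driven by the project's Makefile:

```sh
# Fetch the latest code, then recompile so the binary picks up the change
git pull
make
```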
I've created a new benchmark category in the discussion forum and posted some benchmark information. You can also share your own benchmark results there if you'd like. https://github.com/leejet/stable-diffusion.cpp/discussions/categories/benchmark