
Batch processing prompts

Karthik-Dulam opened this issue 3 years ago · 2 comments

What is the recommended way to batch-process prompts, so the model is not loaded and unloaded from memory for every prompt? Reloading is wasteful and time-consuming, especially on devices with less than 10 GB of RAM, where the model spills into swap and writing to disk is terribly slow.

Ideally, one would process multiple prompts and generate images in sequence without freeing the model from memory. I have seen this done in some Colab notebooks, though they were using GPUs.
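The pattern being asked for can be sketched generically. The `FakeEngine` class below is a stand-in, not this repo's actual API: it only marks where the expensive one-time model load and the per-prompt inference would go.

```python
# Sketch of the load-once, generate-many pattern this issue asks for.
# FakeEngine is a hypothetical stand-in for the real model wrapper;
# constructing it represents the expensive step (reading weights from
# disk), so it must happen exactly once, outside the prompt loop.

class FakeEngine:
    def __init__(self):
        # In real code, model weights would be loaded into memory here.
        self.loaded = True

    def __call__(self, prompt):
        # In real code, this would run diffusion inference for one prompt.
        return f"image for: {prompt}"


def generate_all(prompts):
    engine = FakeEngine()                 # load the model once
    return [engine(p) for p in prompts]   # reuse it for every prompt


if __name__ == "__main__":
    for img in generate_all(["a cat wearing a hat", "a castle at dusk"]):
        print(img)
```

The point is simply that the loop lives inside the process holding the model, rather than relaunching the script (and hence the load) per prompt.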

Karthik-Dulam avatar Sep 18 '22 13:09 Karthik-Dulam

You may want to check this out: https://github.com/bes-dev/stable_diffusion.openvino/pull/58/commits/5d8b9ddfef8245b6c829f9f75b3b67025e7c8f5b

ClashSAN avatar Sep 19 '22 19:09 ClashSAN

I have a solution in my fork that keeps the model in memory between prompts: https://github.com/Drake53/stable_diffusion.openvino/commit/1862800dc4fc220dd81f1f22d6b2bf1ad36eb9b2

Drake53 avatar Sep 25 '22 16:09 Drake53