
use more ram for more speed?

Open TingTingin opened this issue 3 years ago • 7 comments

Is it possible to use more RAM for more speed, while still keeping VRAM usage lower than the main SD repo, for people with a bit more VRAM — 3070 users, for example?

TingTingin avatar Aug 21 '22 03:08 TingTingin

Hi, you can use the --small_batch flag. Currently, the model sends the images to the UNet one by one irrespective of the --n_samples value; using this flag changes it to two images at a time. This increases VRAM usage but reduces inference time.

basujindal avatar Aug 22 '22 07:08 basujindal
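To illustrate the idea behind --small_batch, here is a minimal Python sketch. This is not the repo's actual code — the function name `chunk_batch` and the placeholder latents are made up for illustration — it only shows how splitting the requested samples into sub-batches of two halves the number of UNet calls compared to the one-by-one default.

```python
# Illustrative sketch (not the fork's real code): instead of sending
# latents to the UNet one at a time, split the requested samples into
# chunks and run each chunk through the UNet in a single call.

def chunk_batch(samples, chunk_size):
    """Split a list of latents into sub-batches of at most chunk_size."""
    return [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]

latents = ["img0", "img1", "img2", "img3"]

# Default behaviour: one UNet call per image (four calls here).
print(chunk_batch(latents, 1))

# With --small_batch: two images per call, so half as many UNet calls,
# at the cost of roughly double the activation memory per call.
print(chunk_batch(latents, 2))
```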

@basujindal Off topic but would it be possible to start generating images and save them to disk one by one rather than waiting for the entire process to be done before we start seeing images?

blessedcoolant avatar Aug 22 '22 15:08 blessedcoolant

> @basujindal Off topic but would it be possible to start generating images and save them to disk one by one rather than waiting for the entire process to be done before we start seeing images?

use --n_iter

TingTingin avatar Aug 22 '22 15:08 TingTingin

> @basujindal Off topic but would it be possible to start generating images and save them to disk one by one rather than waiting for the entire process to be done before we start seeing images?
>
> use --n_iter

--n_iter still waits for the entire run to complete before writing out the set number of images. I was wondering if we could start saving and displaying images while the iterations continue to run.

A hack-job way would be to set the number of images to 1 and the iterations to however many images I want... but I was wondering if we'd be able to do it the other way around.

blessedcoolant avatar Aug 22 '22 15:08 blessedcoolant
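The incremental-save idea being asked for here can be sketched as follows. This is a hypothetical outline, not the fork's actual code — `generate`, `run`, and the output file names are invented for illustration. The point is simply that each image is written to disk as soon as its iteration finishes, so finished samples are viewable while the rest are still sampling.

```python
import os

def generate(seed):
    """Stand-in for one sampling iteration; returns fake image bytes."""
    return f"image-data-{seed}".encode()

def run(n_iter, outdir="outputs"):
    os.makedirs(outdir, exist_ok=True)
    paths = []
    for i in range(n_iter):
        img = generate(i)  # one sampling iteration at a time
        path = os.path.join(outdir, f"sample_{i:04d}.png")
        with open(path, "wb") as f:
            f.write(img)   # saved immediately, viewable before the run ends
        paths.append(path)
    return paths
```

The workaround discussed in the thread (samples set to 1 with a high --n_iter) achieves the same effect, because each iteration already saves its output before the next one begins.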

> @basujindal Off topic but would it be possible to start generating images and save them to disk one by one rather than waiting for the entire process to be done before we start seeing images?
>
> use --n_iter
>
> --n_iter still waits for the entire run to complete before writing out the set number of images. I was wondering if we could start saving and displaying images while the iterations continue to run.
>
> A hack-job way would be to set the number of images to 1 and the iterations to however many images I want... but I was wondering if we'd be able to do it the other way around.

I set the samples to 1 and use the iter; it works great.

wborgo avatar Aug 24 '22 16:08 wborgo

Hi, I have added an optional argument, --turbo. It is most effective when using a small batch size. It reduces the inference time to 25 sec per image for txt2img and 15 sec per image for img2img (excluding the one-time cost of loading the model), at the expense of around 1 GB of VRAM. Using the GUI loads the model only once, so you can experiment with prompts while generating only a few images. Cheers!

basujindal avatar Aug 26 '22 19:08 basujindal
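A possible invocation combining the flags discussed in this thread might look like the following. Treat the script path and the exact flag spellings as assumptions to verify against your own checkout of the fork.

```shell
# Hypothetical example: one sample per iteration (so each image is
# saved as soon as it finishes) plus --turbo for faster sampling at
# the cost of roughly 1 GB of extra VRAM.
python optimizedSD/optimized_txt2img.py \
    --prompt "a painting of a fox in the snow" \
    --n_samples 1 \
    --n_iter 4 \
    --turbo
```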

Thanks!

TingTingin avatar Aug 26 '22 20:08 TingTingin