stable-diffusion
Use more RAM for more speed?
Is it possible to use more RAM for more speed, while still keeping VRAM usage lower than the main SD repo, for people with a bit more VRAM (RTX 3070 users, for example)?
Hi, you can use the --small_batch flag. Currently, the model sends images to the UNet one by one irrespective of the --n_samples value; this flag changes it to send two images simultaneously. This increases VRAM usage but reduces inference time.
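For reference, a minimal usage sketch. The script path and the --prompt/--H/--W/--n_samples flags are assumptions based on the usual layout of this repo; only --small_batch and --n_samples are confirmed above, so adjust to your checkout:

```bash
# Hypothetical invocation: 4 samples, with --small_batch pushing two images
# through the UNet at a time (more VRAM, less inference time).
python optimizedSD/optimized_txt2img.py \
    --prompt "a photograph of an astronaut riding a horse" \
    --H 512 --W 512 \
    --n_samples 4 \
    --small_batch
```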
@basujindal Off topic, but would it be possible to start generating images and save them to disk one by one, rather than waiting for the entire process to finish before we start seeing images?
use --n_iter
--n_iter still waits for the entire iteration to complete before producing the set number of images. I was wondering if we can start saving and displaying images while the iterations continue to run.
A hackjob way would be to set the number of images to 1 and the iterations to however many images I want... but I was wondering whether we could do it the other way around.
I set the samples to 1 and use the iterations; it works great.
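Something along these lines, in other words. The script path and --prompt flag are assumptions; only --n_samples and --n_iter come from the discussion above:

```bash
# Generate 8 images one at a time: each iteration produces a single sample
# and writes it to disk before the next iteration starts.
python optimizedSD/optimized_txt2img.py \
    --prompt "a cozy cabin in a snowy forest" \
    --n_samples 1 \
    --n_iter 8
```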
Hi, I have added an optional argument --turbo. This is most effective when using a small batch size. It reduces the inference time to about 25 sec per image for txt2img and 15 sec per image for img2img (excluding the one-time model load), at the expense of around 1 GB of VRAM. Using the GUI loads the model only once, so you can experiment with prompts while generating only a few images. Cheers!
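A hedged sketch of how --turbo might be passed to the two scripts; the script paths and the img2img flags (--init-img, --strength) are assumptions carried over from the standard scripts, not confirmed in this thread:

```bash
# txt2img with --turbo: faster sampling at the cost of ~1 GB extra VRAM
python optimizedSD/optimized_txt2img.py --prompt "a watercolor map of a fantasy city" --turbo

# img2img with --turbo, starting from an existing sketch
python optimizedSD/optimized_img2img.py --prompt "the same scene at sunset" \
    --init-img inputs/sketch.png --strength 0.75 --turbo
```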
Thanks!