Difference between n_samples and n_iter?
Greetings, would anyone care to explain the difference between n_samples and n_iter? They both produce n output images, which is a bit confusing.
Thank you,
--n_samples N_SAMPLES
how many samples to produce for each given prompt. A.k.a batch size
This means how many images will be generated.
--n_iter N_ITER sample this often
number of denoising iterations for each image.
Thank you Riccardo. n_iter generates an image for each iteration, is that normal?
n_iter generates n_samples images for each iteration, so you get n_iter × n_samples images in total.
Thank you for your input Beaver,
What happens in the background for n_iter? Does it re-run the exact same parameters with the same seed, in the hope of getting a variation? What is the difference between each n_iter pass?
The same question could be asked for n_samples, if this is not too much to ask.
Note: maybe someone could point out where to look in the code base?
Thank you,
--n_samples, a.k.a. batch size, is literally a batch size: increasing it packs more data onto your GPU in a single run. Increasing n_iter, on the other hand, just increases the number of batches (each of size --n_samples) that are processed sequentially.
The seed is different for each generated image by default. You can use a fixed seed by passing the --fixed_code parameter to the txt2img.py script (the seed will be fixed across iterations, whose number is controlled by the n_iter parameter, while images within a batch will still have different seeds; so if you want to generate n images using a single seed, set --n_samples to 1 and n_iter to n).
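Since you asked where to look in the code base: conceptually it is an outer loop over n_iter, where each pass samples one batch of n_samples latents. Here is a minimal sketch of that control flow, based only on the behaviour described above; the function names (generate, sample_batch) and the latent shape are placeholders, not the actual txt2img.py source:

```python
import torch

def sample_batch(prompt, noise):
    # Stand-in for the real diffusion sampler: the actual script would denoise
    # `noise` conditioned on `prompt` and decode the latents into images.
    return [latent for latent in noise]

def generate(prompt, n_samples, n_iter, fixed_code=False, latent_shape=(4, 64, 64)):
    """Sketch of the control flow: n_iter sequential batches, each of n_samples images."""
    start_code = None
    if fixed_code:
        # One noise tensor for the whole batch, reused on every iteration.
        # Each image *within* the batch still gets its own slice of that noise.
        start_code = torch.randn(n_samples, *latent_shape)

    images = []
    for _ in range(n_iter):  # outer loop: one batch per iteration
        noise = start_code if start_code is not None else torch.randn(n_samples, *latent_shape)
        images.extend(sample_batch(prompt, noise))  # each batch adds n_samples images
    return images

print(len(generate("a photo of a cat", n_samples=2, n_iter=3)))  # 6 = n_iter * n_samples
```

This is also why --fixed_code with --n_samples 1 gives you n images from a single noise code: every iteration reuses the same one-image start code.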
Thank you for your explanation Kostya.
Using --fixed_code, if the seed is the same for each image, what other parameters are being randomized, if any?
--n_samples, a.k.a. batch size, is literally a batch size: increasing it packs more data onto your GPU in a single run. Increasing n_iter, on the other hand, just increases the number of batches (each of size --n_samples) that are processed sequentially.
Does this mean that a higher --n_samples can potentially produce better results at the expense of GPU memory?
No, it can't. I think you are confusing the training phase (where changing the batch size actually can drastically affect the resulting model quality) with the inference phase (where changing the batch size only affects speed and memory consumption, whereas the generation result is determined by the fixed weights of the trained model plus some randomness in the case of a stochastic sampler).
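To make that concrete, here is a toy illustration that has nothing Stable-Diffusion-specific in it: a frozen linear layer stands in for the trained model, and the same fixed inputs are run either as one batch or one at a time. The per-item outputs match, so batch size at inference only trades memory for throughput:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 16)   # frozen "model": fixed weights, like a trained network
model.eval()

latents = torch.randn(8, 16)      # 8 fixed inputs (think: 8 start codes)

with torch.no_grad():
    as_one_batch = model(latents)                                          # like n_samples=8, n_iter=1
    sequentially = torch.cat([model(latents[i:i + 1]) for i in range(8)])  # like n_samples=1, n_iter=8

print(torch.allclose(as_one_batch, sequentially))  # True: the result does not depend on batch size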
--n_samples, a.k.a. batch size, is literally a batch size: increasing it packs more data onto your GPU in a single run. Increasing n_iter, on the other hand, just increases the number of batches (each of size --n_samples) that are processed sequentially. The seed is different for each generated image by default. You can use a fixed seed by passing the --fixed_code parameter to the txt2img.py script (the seed will be fixed across iterations, whose number is controlled by the n_iter parameter, while images within a batch will still have different seeds; so if you want to generate n images using a single seed, set --n_samples to 1 and n_iter to n).
How can we know the seed for each image in a batch? I notice that it is not a simple +1, +2.