
[Feature Request]: Cache checkpoints in RAM before using them

Open · RagingFlames opened this issue 2 years ago · 1 comment

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What would your feature do?

There is already a setting to cache a certain number of checkpoints in RAM. This feature would prefetch the next model to be used when, for example, doing an XY plot across multiple checkpoints, so there is no downtime loading a model from disk. Say you are doing an XY plot across 3 checkpoints and have enabled caching 3 checkpoints in RAM. The first checkpoint is loaded normally, but as soon as image generation starts, the second checkpoint is loaded into RAM in the background. When the first image finishes, the next checkpoint would normally have to be loaded from disk, but it is already waiting in RAM because it was prefetched. The same applies to the third checkpoint: it is loaded into RAM while the second image is being generated.
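The overlap described above can be sketched with a one-worker thread pool that loads checkpoint N+1 while checkpoint N is generating. This is only an illustration of the idea, not webui's actual internals: `load_checkpoint` and `generate_image` are hypothetical placeholders for the real model-loading and sampling code.

```python
# Sketch of the proposed prefetch loop. load_checkpoint() and
# generate_image() are hypothetical stand-ins for the real webui code.
from concurrent.futures import ThreadPoolExecutor


def load_checkpoint(name):
    # Placeholder for reading a .ckpt/.safetensors file from disk into RAM.
    return {"name": name}


def generate_image(model):
    # Placeholder for the sampling step that occupies the GPU.
    return f"image from {model['name']}"


def run_xy_plot(checkpoint_names):
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Load the first checkpoint normally.
        future = pool.submit(load_checkpoint, checkpoint_names[0])
        for i, name in enumerate(checkpoint_names):
            # Wait only if the prefetch hasn't finished yet.
            model = future.result()
            if i + 1 < len(checkpoint_names):
                # Start loading the next checkpoint in the background
                # while the current one generates its image.
                future = pool.submit(load_checkpoint, checkpoint_names[i + 1])
            results.append(generate_image(model))
    return results
```

Because checkpoint loading is mostly disk I/O, a background thread like this could in principle overlap with GPU-bound generation, which is the downtime the request aims to eliminate.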

Proposed workflow

  1. Go to settings and enable saving checkpoints to RAM
  2. Enable prefetching models
  3. Go to txt2img and set up a prompt to generate images across multiple models
  4. Press generate

Additional information

No response

RagingFlames avatar Mar 11 '23 03:03 RagingFlames

Sounds cool, but unfortunately Python is synchronous by nature; this would require a major overhaul of internals.

vladmandic avatar Mar 11 '23 12:03 vladmandic