
feat(nodes): free gpu mem after invocation

Open · psychedelicious opened this issue 2 years ago · 1 comment

fixes #3125

I'm not sure this is exactly correct, but after this change my VRAM usage returns to baseline (3–4 GB with a model loaded) after generation, and after an OOM error.
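For context, here is a minimal sketch of what "freeing GPU memory after an invocation" typically looks like with PyTorch and CUDA. The function name free_gpu_memory is hypothetical, and the actual change in this PR may differ:

import gc

import torch

def free_gpu_memory() -> None:
    # Drop Python references to intermediate tensors so their
    # GPU allocations become collectable.
    gc.collect()
    # Return cached blocks from PyTorch's caching allocator to the
    # driver; without this, freed VRAM still appears as "in use"
    # in tools like nvidia-smi.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()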

psychedelicious · May 03 '23 09:05

This might not be necessary. The new model manager I'm working on uses a context manager to free GPU memory when a model is no longer in context:

manager = ModelCache(max_loaded_models=4)
with manager.get_model('stabilityai/stable-diffusion-2') as model:
    do_something(model)  # model is on the GPU only inside this block

The model is loaded onto the GPU on entry into the context and unloaded on exit. Any model with a to() method will work. Fast caching into CPU RAM is also supported.
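For illustration, a minimal sketch of how a context-manager cache along these lines can work. SimpleModelCache and _load_from_disk are hypothetical names, not the actual ModelCache implementation, and eviction beyond max_loaded_models is omitted:

from contextlib import contextmanager

import torch

class SimpleModelCache:
    def __init__(self, max_loaded_models: int = 4):
        self.max_loaded_models = max_loaded_models  # eviction not shown
        self._cpu_cache = {}  # model key -> model kept in CPU RAM

    @contextmanager
    def get_model(self, key: str):
        model = self._cpu_cache.get(key)
        if model is None:
            model = self._load_from_disk(key)  # hypothetical loader
            self._cpu_cache[key] = model
        try:
            # Entry: move the model onto the GPU. Anything with a
            # to() method (e.g. torch.nn.Module) works here.
            yield model.to("cuda")
        finally:
            # Exit: move back to CPU RAM and release the VRAM it held.
            model.to("cpu")
            torch.cuda.empty_cache()

    def _load_from_disk(self, key: str):
        raise NotImplementedError  # placeholder for real model loading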

I'm just writing the regression tests for this and will commit a PR soon.

lstein · May 03 '23 15:05