InvokeAI
[bug]: ControlNet `zoe` processor does not free VRAM
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Linux
GPU
cuda
VRAM
No response
What version did you experience this issue on?
Main
What happened?
The zoe processor does not free VRAM when it finishes processing. Each time it runs, it uses more VRAM:
(Yellow line is VRAM usage; each of the three steps corresponds to one run of the `ZoeImageProcessor` node.)
It appears to be the only ControlNet processor that uses any VRAM; I wonder whether the other processors would have the same issue if they ran on the GPU.
I tried a couple of things to unload the model, but none of them works:
- `torch.cuda.empty_cache()`
- `torch.clear_autocast_cache()`
- `gc.collect()`
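Those calls can only reclaim memory that is no longer referenced anywhere; if the processor node still holds a reference to the model, `torch.cuda.empty_cache()` has nothing to release. A minimal pure-Python sketch of that principle (using `weakref` and a stand-in object instead of a real CUDA model, so the names here are illustrative):

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a loaded Zoe depth model (illustrative only)."""
    pass

model = FakeModel()
ref = weakref.ref(model)

# While `model` is still referenced, collection frees nothing --
# the analogue of calling torch.cuda.empty_cache() while the
# processor node still holds the model.
gc.collect()
assert ref() is not None

# Only after the reference is dropped can the memory be reclaimed,
# i.e. `del model` has to happen before empty_cache() can help.
del model
gc.collect()
assert ref() is None
```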
I suspect the root cause is the repeated loading of the model via `from_pretrained`, which does not check whether the model is already loaded.
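One way to avoid the repeated loads would be to memoize the loader so `from_pretrained` runs only once per model id. A hypothetical sketch (the `load_zoe_model` name is an assumption, not InvokeAI's actual API; a counter stands in for the expensive `from_pretrained` call so the sketch is runnable):

```python
from functools import lru_cache

# Records one entry per actual (simulated) from_pretrained call.
calls = []

@lru_cache(maxsize=1)
def load_zoe_model(model_id: str):
    """Load the model once, then return the cached instance.

    A real implementation would call something like
    ZoeDetector.from_pretrained(model_id) here instead of
    constructing a placeholder object.
    """
    calls.append(model_id)
    return object()  # placeholder for the loaded model

a = load_zoe_model("zoe-depth")
b = load_zoe_model("zoe-depth")
assert a is b          # second call returns the cached instance
assert len(calls) == 1 # the expensive load ran only once
```

Invalidating the cache (e.g. `load_zoe_model.cache_clear()`) would then be the single place where the model reference is dropped before freeing VRAM.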
Screenshots
No response
Additional context
No response
Contact Details
No response