
[bug]: model cache logs negative VRAM requested

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

RTX 3060

GPU VRAM

12 GB

Version number

5.9

What happened

When generating Flux images, I frequently see messages like this in the log:

```
[21:15:07,670]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '924bda16-8c73-4aed-a06c-6216705962ea:transformer' (Flux) onto cuda device in 1.20s. Total model size: 8573.12MB, VRAM: 6444.62MB (75.2%)
[21:15:38,492]::[InvokeAI]::WARNING --> Loading 0.0 MB into VRAM, but only -283.4375 MB were requested. This is the minimum set of weights in VRAM required to run the model.
[21:15:38,494]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'c75a604c-a146-44be-a294-36a1842c3f7e:vae' (AutoEncoder) onto cuda device in 0.14s. Total model size: 159.87MB, VRAM: 0.00MB (0.0%)
```

Two things about that warning are weird:

  • it reports that a negative number of megabytes was requested?
  • it says that zero megabytes were loaded?
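
My guess at the arithmetic behind both oddities, as a minimal hypothetical sketch (this is not InvokeAI's actual code, and the function name and parameters below are invented for illustration): if the "requested" figure is computed as a signed difference between the model's size and the VRAM headroom the cache already has, and only the "loaded" figure is clamped at zero, then a model that fits entirely in the existing headroom logs a negative "requested" amount alongside a 0.0 MB load.

```python
def log_partial_load(model_size_mb: float, vram_headroom_mb: float) -> None:
    """Hypothetical reconstruction of the warning's arithmetic (sizes in MB)."""
    # Assumed formula: how much of the model does NOT fit in free VRAM.
    # When the whole model fits, this difference goes negative instead of
    # being clamped to zero.
    requested_mb = model_size_mb - vram_headroom_mb
    # Only the "loaded" figure is clamped, so the message can pair
    # "Loading 0.0 MB" with a negative "requested" amount.
    loaded_mb = max(0.0, requested_mb)
    print(
        f"Loading {loaded_mb} MB into VRAM, but only "
        f"{requested_mb} MB were requested."
    )

# Numbers from the log above: the 159.87 MB autoencoder loading while roughly
# 443.31 MB of VRAM headroom remains would produce a warning like the one
# reported.
log_partial_load(159.87, 159.87 + 283.4375)
```

If something like that is going on, the warning fires on exactly the wrong condition: the model fit entirely, so nothing more needed to be requested.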

What you expected to happen

Not sure what to expect from the model cache's logging.

keturn · Mar 21 '25, 21:03