[bug]: guidance_in.in_layer.weight error

[Open] radialmonster opened this issue 4 months ago • 7 comments

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 3090

GPU VRAM

24 GB

Version number

6.0.2

Browser

Edge

Python dependencies

No response

What happened

New Invoke install; I just installed the FLUX starter models and am using FLUX.1 schnell (quantized) for text-to-image. I keep getting errors about guidance_in.in_layer.weight.

[2025-07-13 14:20:09,648]::[InvokeAI]::INFO --> Executing queue item 7, session 2a7fdd98-6005-4890-8a56-ee100e87c6b3
[2025-07-13 14:20:12,911]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '821a8568-9076-45e8-9a8f-8358fc54b316:text_encoder_2' (T5EncoderModel) onto cuda device in 3.00s. Total model size: 4667.39MB, VRAM: 4667.39MB (100.0%)
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
[2025-07-13 14:20:13,103]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '821a8568-9076-45e8-9a8f-8358fc54b316:tokenizer_2' (T5TokenizerFast) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
C:\ai\InvokeAI\.venv\Lib\site-packages\bitsandbytes\autograd\_functions.py:185: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
  warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
[2025-07-13 14:20:14,232]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '6413f756-567c-4866-b2dc-26aac4c6a927:text_encoder' (CLIPTextModel) onto cuda device in 0.07s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-07-13 14:20:14,289]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '6413f756-567c-4866-b2dc-26aac4c6a927:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-07-13 14:20:14,541]::[InvokeAI]::ERROR --> Error while invoking session 2a7fdd98-6005-4890-8a56-ee100e87c6b3, invocation bca007b5-3c21-4573-8eb5-573af6bd6279 (flux_denoise): Error(s) in loading state_dict for Flux:
  Unexpected key(s) in state_dict: "guidance_in.in_layer.bias", "guidance_in.in_layer.weight", "guidance_in.in_layer.weight.absmax", "guidance_in.in_layer.weight.quant_map", "guidance_in.in_layer.weight.quant_state.bitsandbytes__nf4", "guidance_in.out_layer.bias", "guidance_in.out_layer.weight", "guidance_in.out_layer.weight.absmax", "guidance_in.out_layer.weight.quant_map", "guidance_in.out_layer.weight.quant_state.bitsandbytes__nf4".
[2025-07-13 14:20:14,541]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 241, in invoke_internal
    output = self.invoke(context)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_denoise.py", line 164, in invoke
    latents = self._run_diffusion(context)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_denoise.py", line 345, in _run_diffusion
    context.models.load(self.transformer.transformer).model_on_device()
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 394, in load
    return self._services.model_manager.load.load_model(model, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 71, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    cache_record = self._load_and_cache(model_config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 306, in _load_model
    return self._load_from_singlefile(config)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 330, in _load_from_singlefile
    model.load_state_dict(sd, assign=True)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 2593, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for Flux: Unexpected key(s) in state_dict: "guidance_in.in_layer.bias", "guidance_in.in_layer.weight", "guidance_in.in_layer.weight.absmax", "guidance_in.in_layer.weight.quant_map", "guidance_in.in_layer.weight.quant_state.bitsandbytes__nf4", "guidance_in.out_layer.bias", "guidance_in.out_layer.weight", "guidance_in.out_layer.weight.absmax", "guidance_in.out_layer.weight.quant_map", "guidance_in.out_layer.weight.quant_state.bitsandbytes__nf4".

[2025-07-13 14:20:14,555]::[InvokeAI]::INFO --> Graph stats: 2a7fdd98-6005-4890-8a56-ee100e87c6b3
Node               Calls  Seconds  VRAM Used
flux_model_loader  1      0.014s   0.000G
string             1      0.000s   0.000G
flux_text_encoder  1      4.735s   5.035G
collect            1      0.002s   5.031G
integer            1      0.001s   5.031G
flux_denoise       1      0.128s   5.036G
TOTAL GRAPH EXECUTION TIME: 4.880s
TOTAL GRAPH WALL TIME: 4.883s
RAM used by InvokeAI process: 6.23G (+5.892G)
RAM used to load models: 5.02G
VRAM in use: 5.031G
RAM cache statistics:
  Model cache hits: 4
  Model cache misses: 5
  Models cached: 4
  Models cleared from cache: 0
  Cache high water mark: 5.02/0.00G

C:\ai\InvokeAI\.venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:131: FutureWarning: 'get_token_permission' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '1.0'. Permissions are more complex than when get_token_permission was first introduced. OAuth and fine-grain tokens allows for more detailed permissions. If you need to know the permissions associated with a token, please use whoami and check the 'auth' key.
  warnings.warn(warning_message, FutureWarning)
[2025-07-13 14:25:43,866]::[InvokeAI]::INFO --> Executing queue item 8, session 85f7e209-d291-4910-a488-9b833c5d673f
[2025-07-13 14:25:43,957]::[InvokeAI]::ERROR --> Error while invoking session 85f7e209-d291-4910-a488-9b833c5d673f, invocation 0f39b916-21f0-4b08-82bf-40c45c49aece (flux_denoise): Error(s) in loading state_dict for Flux:
  Unexpected key(s) in state_dict: "guidance_in.in_layer.bias", "guidance_in.in_layer.weight", "guidance_in.in_layer.weight.absmax", "guidance_in.in_layer.weight.quant_map", "guidance_in.in_layer.weight.quant_state.bitsandbytes__nf4", "guidance_in.out_layer.bias", "guidance_in.out_layer.weight", "guidance_in.out_layer.weight.absmax", "guidance_in.out_layer.weight.quant_map", "guidance_in.out_layer.weight.quant_state.bitsandbytes__nf4".
[2025-07-13 14:25:43,957]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 241, in invoke_internal
    output = self.invoke(context)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_denoise.py", line 164, in invoke
    latents = self._run_diffusion(context)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_denoise.py", line 345, in _run_diffusion
    context.models.load(self.transformer.transformer).model_on_device()
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 394, in load
    return self._services.model_manager.load.load_model(model, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 71, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    cache_record = self._load_and_cache(model_config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 306, in _load_model
    return self._load_from_singlefile(config)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 330, in _load_from_singlefile
    model.load_state_dict(sd, assign=True)
  File "C:\ai\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 2593, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for Flux: Unexpected key(s) in state_dict: "guidance_in.in_layer.bias", "guidance_in.in_layer.weight", "guidance_in.in_layer.weight.absmax", "guidance_in.in_layer.weight.quant_map", "guidance_in.in_layer.weight.quant_state.bitsandbytes__nf4", "guidance_in.out_layer.bias", "guidance_in.out_layer.weight", "guidance_in.out_layer.weight.absmax", "guidance_in.out_layer.weight.quant_map", "guidance_in.out_layer.weight.quant_state.bitsandbytes__nf4".

[2025-07-13 14:25:43,967]::[InvokeAI]::INFO --> Graph stats: 85f7e209-d291-4910-a488-9b833c5d673f
Node               Calls  Seconds  VRAM Used
flux_model_loader  1      0.000s   5.031G
string             1      0.001s   5.031G
flux_text_encoder  1      0.000s   5.031G
collect            1      0.001s   5.031G
integer            1      0.000s   5.031G
flux_denoise       1      0.086s   5.036G
TOTAL GRAPH EXECUTION TIME: 0.088s
TOTAL GRAPH WALL TIME: 0.088s
RAM used by InvokeAI process: 6.23G (+0.005G)
RAM used to load models: 0.00G
VRAM in use: 5.031G
RAM cache statistics:
  Model cache hits: 0
  Model cache misses: 1
  Models cached: 0
  Models cleared from cache: 0
  Cache high water mark: 0.00/0.00G

What you expected to happen

Images to be generated.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

radialmonster · Jul 13 '25 18:07
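For context on the failure mode in the traceback above: PyTorch's load_state_dict is strict by default, so any checkpoint key without a matching parameter in the instantiated model raises a RuntimeError. The guidance_in.* tensors (including the .absmax / .quant_map / .quant_state.bitsandbytes__nf4 entries, which are bitsandbytes NF4 quantization state) belong to the guidance-distilled FLUX dev architecture; schnell is built without a guidance embedder, so those keys have nowhere to go. That suggests either a dev-style checkpoint being served for the schnell entry, or the loader instantiating the wrong variant. A minimal toy sketch of the mechanism (a stand-in model, not InvokeAI code):

```python
import torch
import torch.nn as nn

# Toy stand-in for the Flux module: its state_dict has only "weight"/"bias".
model = nn.Linear(4, 4)

# Simulate a checkpoint that carries a dev-only guidance tensor.
sd = model.state_dict()
sd["guidance_in.in_layer.weight"] = torch.zeros(1)

try:
    # Same call shape as flux.py line 330; strict=True is the default.
    model.load_state_dict(sd, assign=True)
except RuntimeError as err:
    print(err)  # Unexpected key(s) in state_dict: "guidance_in.in_layer.weight"
```

Passing strict=False (or filtering the guidance_in.* entries out of the state dict first) would suppress the error, but that only papers over the mismatch: the checkpoint and the instantiated architecture need to agree.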

Also getting a similar error.

RTX 3090

InvokeAI 6.0.2

Flux Dev

MelonSmasher · Jul 17 '25 19:07

Same here. Also RTX 3090, InvokeAI 6.0.2, and Flux Dev.

jdakillah · Jul 17 '25 20:07

Side note: Flux Kontext Dev (Quantized) is working for me.

MelonSmasher · Jul 17 '25 21:07

This is a serious blocker. Basically, the default "starter pack" model completely fails to work.

Ark-kun · Jul 20 '25 22:07

Also running into this issue

RTX 4070 Super

Flux Dev

InvokeAI v6.2.0

Gh0st-drive · Jul 29 '25 09:07

Same issue. RTX 4080 Super, Flux Dev, InvokeAI v6.2.0.

QuantumGlitch-dev · Aug 03 '25 12:08

Ran into this issue as well. I pivoted to use the FLUX.1 dev model (non-quantized: ~24GB) and that seemed to do the trick for me.

domanchi · Aug 06 '25 18:08
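For anyone who wants to verify what actually got downloaded before re-installing: a checkpoint's tensor names can be listed without loading any weights. A diagnostic sketch, assuming the model is a single local .safetensors file (the filename below is a placeholder, not the real install path):

```python
# List whether a checkpoint carries the dev-only guidance_in.* tensors.
from safetensors import safe_open

path = "flux1-schnell-quantized.safetensors"  # placeholder: point at the installed file
with safe_open(path, framework="pt") as f:
    guidance_keys = [k for k in f.keys() if k.startswith("guidance_in.")]

# Guidance-distilled (dev-style) checkpoints carry these tensors; a true
# schnell checkpoint should report none.
print(guidance_keys or "no guidance_in.* keys (schnell-style checkpoint)")
```

If a file installed as schnell reports guidance_in.* keys, the checkpoint itself is the mismatch; if a dev checkpoint fails with the same error, the loader is likely detecting the wrong variant, which would fit the Flux Dev reports above.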