
[bug]: SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

Status: Open · PsypmP opened this issue 9 months ago · 0 comments

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

4080 mobile

GPU VRAM

12

Version number

5.8.0a1 (per the log below) — the error occurs on every version I have tried.

Browser

None — standalone version.

Python dependencies

Starting up...
Started Invoke process with PID: 6272
[2025-03-09 08:48:07,964]::[InvokeAI]::INFO --> cuDNN version: 90100
[2025-03-09 08:48:10,789]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-03-09 08:48:11,784]::[InvokeAI]::INFO --> InvokeAI version 5.8.0a1
[2025-03-09 08:48:11,784]::[InvokeAI]::INFO --> Root directory = F:\!NeuroNet\InvokeAI
[2025-03-09 08:48:11,785]::[InvokeAI]::INFO --> Initializing database at F:\!NeuroNet\InvokeAI\databases\invokeai.db
[2025-03-09 08:48:12,898]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 6047.93 MB. Heuristics applied: [1].
[2025-03-09 08:48:12,909]::[InvokeAI]::INFO --> Pruned 2 finished queue items
[2025-03-09 08:48:12,928]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090/ (Press CTRL+C to quit)
[2025-03-09 08:48:48,042]::[InvokeAI]::INFO --> Executing queue item 4, session adfe1366-ac35-4373-8fd6-483162e3f273
[2025-03-09 08:48:48,218]::[InvokeAI]::ERROR --> Error while invoking session adfe1366-ac35-4373-8fd6-483162e3f273, invocation 1e8fd204-8017-41be-aa66-ccb2f99d1bcb (flux_text_encoder): Error while deserializing header: MetadataIncompleteBuffer
[2025-03-09 08:48:48,218]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 303, in invoke_internal
    output = self.invoke(context)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 60, in invoke
    t5_embeddings = self._t5_encode(context)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 74, in _t5_encode
    t5_encoder_info = context.models.load(self.t5_encoder.text_encoder)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 397, in load
    return self._services.model_manager.load.load_model(model, submodel_type)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 58, in load_model
    cache_record = self._load_and_cache(model_config, submodel_type)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 79, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 151, in _load_model
    state_dict = load_file(state_dict_path)
  File "F:\!NeuroNet\InvokeAI\.venv\Lib\site-packages\safetensors\torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

[2025-03-09 08:48:48,239]::[InvokeAI]::INFO --> Graph stats: adfe1366-ac35-4373-8fd6-483162e3f273
Node               Calls  Seconds  VRAM Used
flux_model_loader  1      0.012s   0.000G
flux_text_encoder  1      0.150s   0.000G
TOTAL GRAPH EXECUTION TIME: 0.162s
TOTAL GRAPH WALL TIME: 0.162s
RAM used by InvokeAI process: 0.87G (+0.003G)
RAM used to load models: 0.00G
RAM cache statistics:
   Model cache hits: 0
   Model cache misses: 1
   Models cached: 0
   Models cleared from cache: 0
   Cache high water mark: 0.00/0.00G

[2025-03-09 08:49:00,695]::[InvokeAI]::INFO --> Executing queue item 5, session 04258e9f-4225-401b-9b27-782bdfb4d8c9
[2025-03-09 08:49:00,752]::[InvokeAI]::ERROR --> Error while invoking session 04258e9f-4225-401b-9b27-782bdfb4d8c9, invocation 2b0bb7f9-5339-4042-9872-6e867ad1d7f7 (flux_text_encoder): Error while deserializing header: MetadataIncompleteBuffer
[2025-03-09 08:49:00,752]::[InvokeAI]::ERROR --> Traceback (most recent call last): [identical to the first traceback above, ending in] safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

[2025-03-09 08:49:00,763]::[InvokeAI]::INFO --> Graph stats: 04258e9f-4225-401b-9b27-782bdfb4d8c9
Node               Calls  Seconds  VRAM Used
flux_model_loader  1      0.000s   0.000G
flux_text_encoder  1      0.054s   0.000G
TOTAL GRAPH EXECUTION TIME: 0.054s
TOTAL GRAPH WALL TIME: 0.054s
RAM used by InvokeAI process: 0.87G (+0.000G)
RAM used to load models: 0.00G
RAM cache statistics:
   Model cache hits: 0
   Model cache misses: 1
   Models cached: 0
   Models cleared from cache: 0
   Cache high water mark: 0.00/0.00G

[2025-03-09 08:50:47,787]::[InvokeAI]::INFO --> Executing queue item 6, session 1241bca7-c07a-4715-bb2a-fa1aa499f174
[2025-03-09 08:50:47,853]::[InvokeAI]::ERROR --> Error while invoking session 1241bca7-c07a-4715-bb2a-fa1aa499f174, invocation 60a0c1c9-7ac1-4666-b1cb-2a300116e213 (flux_text_encoder): Error while deserializing header: MetadataIncompleteBuffer
[2025-03-09 08:50:47,853]::[InvokeAI]::ERROR --> Traceback (most recent call last): [identical to the first traceback above, ending in] safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

[2025-03-09 08:50:47,867]::[InvokeAI]::INFO --> Graph stats: 1241bca7-c07a-4715-bb2a-fa1aa499f174
Node               Calls  Seconds  VRAM Used
flux_model_loader  1      0.000s   0.000G
flux_text_encoder  1      0.061s   0.000G
TOTAL GRAPH EXECUTION TIME: 0.061s
TOTAL GRAPH WALL TIME: 0.061s
RAM used by InvokeAI process: 0.87G (+0.000G)
RAM used to load models: 0.00G
RAM cache statistics:
   Model cache hits: 0
   Model cache misses: 1
   Models cached: 0
   Models cleared from cache: 0
   Cache high water mark: 0.00/0.00G

We'll activate the virtual environment for the install at F:\!NeuroNet\InvokeAI.

What happened

I removed InvokeAI from the system completely and installed several different versions. I have uninstalled everything that can be uninstalled, updated everything that can be updated, and re-downloaded the models four times. The same error occurs every time.
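For context on what the error means: safetensors raises `MetadataIncompleteBuffer` when a `.safetensors` file ends before the header it declares is complete — typically a truncated or corrupted download of the model file (here, the T5 text encoder that `flux_text_encoder` tries to load). The following is a minimal diagnostic sketch, not part of InvokeAI; the file path you pass in is a placeholder:

```python
import json
import struct
from pathlib import Path


def check_safetensors(path: str) -> tuple[bool, str]:
    """Check a .safetensors file for the truncation that triggers
    MetadataIncompleteBuffer: the first 8 bytes are a little-endian
    u64 giving the JSON header length, and the file must contain at
    least that many bytes after the prefix."""
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False, "file shorter than the 8-byte header-length prefix"
    (header_len,) = struct.unpack("<Q", data[:8])
    if len(data) - 8 < header_len:
        return False, (f"header declares {header_len} bytes but only "
                       f"{len(data) - 8} remain: truncated file")
    try:
        json.loads(data[8:8 + header_len])
    except json.JSONDecodeError:
        return False, "header bytes are not valid JSON"
    return True, "header parses cleanly"
```

If this reports a truncated file for the T5 encoder under your models directory, deleting that one file and re-downloading it (rather than reinstalling InvokeAI) would be the fix to try first.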

What you expected to happen

Generation should run without the SafetensorError.

How to reproduce the problem

Unclear — per the log, the error occurs on every FLUX generation attempt, at the `flux_text_encoder` step.
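One concrete way to narrow this down instead of reinstalling: compare the on-disk model file's SHA-256 against the checksum published wherever it was downloaded from (for example the model's Hugging Face page — the actual source here is unknown, so treat both the path and the reference hash as placeholders). A minimal sketch:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a large model file through SHA-256 in 1 MiB chunks,
    without loading it all into RAM, and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

A digest (or even just a file-size) mismatch against the published value confirms a bad download, which is consistent with the MetadataIncompleteBuffer symptom.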

Additional context

No response

Discord username

No response

PsypmP · Mar 09 '25 03:03