[bug]: AssertionError when generating with FLUX
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
RX 6800 XT
GPU VRAM
16GB
Version number
5.0.0
Browser
LibreWolf 130.0
Python dependencies
```json
{
  "accelerate": "0.30.1",
  "compel": "2.0.2",
  "cuda": null,
  "diffusers": "0.27.2",
  "numpy": "1.26.4",
  "opencv": "4.9.0.80",
  "onnx": "1.15.0",
  "pillow": "10.4.0",
  "python": "3.11.9",
  "torch": "2.2.2+rocm5.6",
  "torchvision": "0.17.2+rocm5.6",
  "transformers": "4.41.1",
  "xformers": null
}
```
What happened
On a fresh install of InvokeAI 5.0.0 on Arch Linux with ROCm, image generation using FLUX Dev (Quantized) fails with an `AssertionError`:
Generate images with a browser-based interface
[2024-09-28 20:53:08,605]::[InvokeAI]::INFO --> Patchmatch initialized
[2024-09-28 20:53:09,210]::[InvokeAI]::INFO --> Using torch device: AMD Radeon RX 6800 XT
[2024-09-28 20:53:09,362]::[InvokeAI]::INFO --> cuDNN version: 2020000
[2024-09-28 20:53:09,382]::[uvicorn.error]::INFO --> Started server process [8075]
[2024-09-28 20:53:09,382]::[uvicorn.error]::INFO --> Waiting for application startup.
[2024-09-28 20:53:09,382]::[InvokeAI]::INFO --> InvokeAI version 5.0.0
[2024-09-28 20:53:09,382]::[InvokeAI]::INFO --> Root directory = /home/noah/applications/invokeai
[2024-09-28 20:53:09,383]::[InvokeAI]::INFO --> Initializing database at /home/noah/applications/invokeai/databases/invokeai.db
[2024-09-28 20:53:09,656]::[uvicorn.error]::INFO --> Application startup complete.
[2024-09-28 20:53:09,657]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2024-09-28 20:53:13,555]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P8wCSPB HTTP/1.1" 200
[2024-09-28 20:53:13,562]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P8wCSPM&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:13,564]::[uvicorn.error]::INFO --> ('127.0.0.1', 46884) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket&sid=PYL4dP2vfJsLhKZxAAAA" [accepted]
[2024-09-28 20:53:13,565]::[uvicorn.error]::INFO --> connection open
[2024-09-28 20:53:13,565]::[uvicorn.access]::INFO --> 127.0.0.1:46872 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P8wCSPM.0&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:13,569]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P8wCSPV&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:13,576]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P8wCSPc&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:13,622]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-09-28 20:53:13,637]::[uvicorn.access]::INFO --> 127.0.0.1:46858 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P8wCSQZ&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:13,638]::[uvicorn.access]::INFO --> 127.0.0.1:46872 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P8wCSQZ.0&sid=PYL4dP2vfJsLhKZxAAAA HTTP/1.1" 200
[2024-09-28 20:53:21,271]::[uvicorn.access]::INFO --> 127.0.0.1:35344 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-09-28 20:53:21,291]::[uvicorn.access]::INFO --> 127.0.0.1:35344 - "GET /api/v1/queue/default/current HTTP/1.1" 200
[2024-09-28 20:53:21,298]::[uvicorn.access]::INFO --> 127.0.0.1:35360 - "GET /api/v1/queue/default/counts_by_destination?destination=canvas HTTP/1.1" 200
[2024-09-28 20:53:21,300]::[uvicorn.access]::INFO --> 127.0.0.1:35366 - "GET /api/v1/queue/default/list HTTP/1.1" 200
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[2024-09-28 20:53:21,410]::[uvicorn.access]::INFO --> 127.0.0.1:35344 - "GET /api/v1/queue/default/current HTTP/1.1" 200
[2024-09-28 20:53:28,178]::[InvokeAI]::ERROR --> Error while invoking session d8325f52-e93b-4eed-a7b8-fe3318b4a0d0, invocation a2783a88-4a1e-4b2c-a7d0-57bbc15b272a (flux_text_encoder):
[2024-09-28 20:53:28,178]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/invocations/baseinvocation.py", line 290, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/invocations/flux_text_encoder.py", line 45, in invoke
t5_embeddings = self._t5_encode(context)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/invocations/flux_text_encoder.py", line 60, in _t5_encode
with (
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/load_base.py", line 67, in __enter__
self._locker.lock()
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/model_locker.py", line 45, in lock
self._cache.move_model_to_device(self._cache_entry, self._cache.execution_device)
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 308, in move_model_to_device
raise e
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 303, in move_model_to_device
cache_entry.model.load_state_dict(new_dict, assign=True)
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2139, in load_state_dict
load(self, state_dict)
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
[Previous line repeated 4 more times]
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2121, in load
module._load_from_state_dict(
File "/home/noah/applications/invokeai/.venv/lib/python3.11/site-packages/invokeai/backend/quantization/bnb_llm_int8.py", line 60, in _load_from_state_dict
assert weight_format == 0
^^^^^^^^^^^^^^^^^^
AssertionError
[2024-09-28 20:53:28,241]::[uvicorn.access]::INFO --> 127.0.0.1:49946 - "GET /api/v1/queue/default/current HTTP/1.1" 200
[2024-09-28 20:53:28,543]::[InvokeAI]::INFO --> Graph stats: d8325f52-e93b-4eed-a7b8-fe3318b4a0d0
Node Calls Seconds VRAM Used
flux_model_loader 1 0.011s 0.000G
flux_text_encoder 1 6.863s 4.805G
TOTAL GRAPH EXECUTION TIME: 6.874s
TOTAL GRAPH WALL TIME: 6.875s
RAM used by InvokeAI process: 5.59G (+4.695G)
RAM used to load models: 4.56G
RAM cache statistics:
Model cache hits: 2
Model cache misses: 2
Models cached: 1
Models cleared from cache: 1
Cache high water mark: 4.56/4.00G
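For context on the failing assertion: per the traceback, `bnb_llm_int8.py` rejects any serialized int8 weight whose `weight_format` flag is nonzero. The following is a minimal, hypothetical sketch of that kind of guard (names and structure simplified; not the actual InvokeAI code):

```python
def load_quantized_weight(state_dict: dict, prefix: str = ""):
    """Sketch of a state-dict loader that validates a serialized
    int8 weight-format flag before assigning tensors (a simplified,
    hypothetical stand-in for the InvokeAI/bitsandbytes logic)."""
    weight_format = state_dict.get(prefix + "weight_format", 0)
    # The real code asserts format 0; a state dict serialized by a
    # different bitsandbytes build can carry another value, which
    # trips the bare AssertionError shown in the traceback above.
    assert weight_format == 0, f"unsupported weight_format: {weight_format}"
    return state_dict[prefix + "weight"]
```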
What you expected to happen
Image generates successfully.
How to reproduce the problem
- Start the app
- Download FLUX Dev (Quantized) from the model manager
- Type in a prompt on the canvas
- Click Invoke
Additional context
I'm running InvokeAI in an Arch Linux systemd-nspawn container on top of NixOS, and SDXL models generate without issue.
Discord username
No response
This may be related to #7064. I can also use SDXL without any issues.
I saw this problem on macOS with InvokeAI 5.3.1, installed via Stability Matrix.
Use guidance 4.
After a while, I tried installing v5.5.0 and hit the same issue described in #7064. I then reinstalled InvokeAI using the launcher released last month; generating with FLUX Dev (Quantized) now fails with `AttributeError: 'NoneType' object has no attribute 'cget_col_row_stats'`. Closing in favour of #6962.
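For context on the `cget_col_row_stats` error mentioned above: an `AttributeError` on `NoneType` is the classic symptom of an optional native library whose handle is left as `None` when loading fails, so every later attribute access blows up. A minimal sketch of that failure mode (assumption: the library keeps a `ctypes` handle to its compiled binary; the path below is deliberately nonexistent):

```python
import ctypes


def load_native_lib(path: str):
    """Return a ctypes handle to a shared library, or None if it
    cannot be loaded (e.g. missing or built for the wrong GPU stack)."""
    try:
        return ctypes.CDLL(path)
    except OSError:
        return None  # load failure leaves the handle as None


lib = load_native_lib("libbitsandbytes_nonexistent.so")
try:
    lib.cget_col_row_stats  # attribute access on a None handle
except AttributeError as e:
    # AttributeError: 'NoneType' object has no attribute 'cget_col_row_stats'
    print(e)
```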