InvokeAI
[bug]: Flux Schnell quantized on AMD Ryzen 7 7700 Linux
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
No response
GPU VRAM
APU
Version number
latest
Browser
Firefox
Python dependencies
No response
What happened
I installed the quantized Flux Schnell model via the InvokeAI UI, but when running it on the ROCm build it appears to be usable only on NVIDIA GPUs:
invokeai-rocm-1 | [2024-10-08 13:14:54,226]::[InvokeAI]::ERROR --> Error while invoking session 90a38dfc-82bc-43b6-9342-cb677bbc1199, invocation 7f4b45ed-60ec-44e7-9a95-eae7e5af22ad (flux_text_encoder): Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
invokeai-rocm-1 | [2024-10-08 13:14:54,226]::[InvokeAI]::ERROR --> Traceback (most recent call last):
invokeai-rocm-1 | File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
invokeai-rocm-1 | output = invocation.invoke_internal(context=context, services=self._services)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 290, in invoke_internal
invokeai-rocm-1 | output = self.invoke(context)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
invokeai-rocm-1 | return func(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/invokeai/invokeai/app/invocations/flux_text_encoder.py", line 50, in invoke
invokeai-rocm-1 | t5_embeddings = self._t5_encode(context)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/invokeai/invokeai/app/invocations/flux_text_encoder.py", line 74, in _t5_encode
invokeai-rocm-1 | prompt_embeds = t5_encoder(prompt)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/invokeai/invokeai/backend/flux/modules/conditioner.py", line 28, in forward
invokeai-rocm-1 | outputs = self.hf_module(
invokeai-rocm-1 | ^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 1972, in forward
invokeai-rocm-1 | encoder_outputs = self.encoder(
invokeai-rocm-1 | ^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 1107, in forward
invokeai-rocm-1 | layer_outputs = layer_module(
invokeai-rocm-1 | ^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 687, in forward
invokeai-rocm-1 | self_attention_outputs = self.layer[0](
invokeai-rocm-1 | ^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 594, in forward
invokeai-rocm-1 | attention_output = self.SelfAttention(
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py", line 513, in forward
invokeai-rocm-1 | query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
invokeai-rocm-1 | return self._call_impl(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
invokeai-rocm-1 | return forward_call(*args, **kwargs)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 817, in forward
invokeai-rocm-1 | out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/bitsandbytes/autograd/_functions.py", line 556, in matmul
invokeai-rocm-1 | return MatMul8bitLt.apply(A, B, out, bias, state)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/autograd/function.py", line 574, in apply
invokeai-rocm-1 | return super().apply(*args, **kwargs) # type: ignore[misc]
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/bitsandbytes/autograd/_functions.py", line 291, in forward
invokeai-rocm-1 | using_igemmlt = supports_igemmlt(A.device) and not state.force_no_igemmlt
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/bitsandbytes/autograd/_functions.py", line 220, in supports_igemmlt
invokeai-rocm-1 | if torch.cuda.get_device_capability(device=device) < (7, 5):
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/cuda/__init__.py", line 451, in get_device_capability
invokeai-rocm-1 | prop = get_device_properties(device)
invokeai-rocm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/cuda/__init__.py", line 465, in get_device_properties
invokeai-rocm-1 | _lazy_init() # will define _get_device_properties
invokeai-rocm-1 | ^^^^^^^^^^^^
invokeai-rocm-1 | File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
invokeai-rocm-1 | torch._C._cuda_init()
invokeai-rocm-1 | RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
invokeai-rocm-1 |
invokeai-rocm-1 | [2024-10-08 13:14:54,241]::[InvokeAI]::INFO --> Graph stats: 90a38dfc-82bc-43b6-9342-cb677bbc1199
invokeai-rocm-1 | Node Calls Seconds VRAM Used
invokeai-rocm-1 | flux_model_loader 1 0.001s 0.000G
invokeai-rocm-1 | flux_text_encoder 1 1.859s 0.000G
invokeai-rocm-1 | TOTAL GRAPH EXECUTION TIME: 1.860s
invokeai-rocm-1 | TOTAL GRAPH WALL TIME: 1.861s
invokeai-rocm-1 | RAM used by InvokeAI process: 14.95G (-0.120G)
invokeai-rocm-1 | RAM used to load models: 4.56G
invokeai-rocm-1 | RAM cache statistics:
invokeai-rocm-1 | Model cache hits: 2
invokeai-rocm-1 | Model cache misses: 2
invokeai-rocm-1 | Models cached: 7
invokeai-rocm-1 | Models cleared from cache: 1
invokeai-rocm-1 | Cache high water mark: 7.46/7.50G
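The traceback shows the failure originates in bitsandbytes, not InvokeAI itself: the LLM.int8() matmul path (`supports_igemmlt`) calls `torch.cuda.get_device_capability()`, which requires an NVIDIA driver and therefore raises `RuntimeError` on a ROCm system. A minimal sketch of the kind of device guard that would avoid ever reaching that call (this is a hypothetical helper for illustration, not InvokeAI's or bitsandbytes' actual code; the parameters stand in for `torch.version.hip`, `torch.cuda.is_available()`, and the compute capability tuple):

```python
def can_use_bnb_int8(hip_version, cuda_available, capability):
    """Hypothetical guard mirroring bitsandbytes' supports_igemmlt check:
    LLM.int8() needs a CUDA device with compute capability >= (7, 5).
    On ROCm builds of PyTorch (where torch.version.hip is not None),
    torch.cuda.get_device_capability() raises the RuntimeError seen in
    the log above, so a guard must bail out before reaching it."""
    if hip_version is not None:      # ROCm/HIP build of PyTorch
        return False
    if not cuda_available:           # no usable CUDA device present
        return False
    return capability >= (7, 5)     # bitsandbytes' minimum for igemmlt

# On this system (ROCm build, no NVIDIA driver) the guard would
# return False instead of crashing mid-inference:
print(can_use_bnb_int8("6.1", True, (0, 0)))   # ROCm: int8 path unusable
print(can_use_bnb_int8(None, True, (8, 6)))    # CUDA Ampere: usable
```

In other words, the quantized (bitsandbytes) Flux variants are effectively CUDA-only at the moment, and the ROCm build would need either upstream bitsandbytes ROCm support or a check like the above to reject the model with a clear error.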
What you expected to happen
The quantized Flux Schnell model should run (or at least fail with a clear "unsupported on ROCm" message) on the AMD/ROCm build, rather than crashing with an NVIDIA-driver error.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response