
[bug]: Crash using FLUX.1 Dev

DJP1973 opened this issue 11 months ago

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3060

GPU VRAM

12 GB

Version number

5.5.0

Browser

Edge

Python dependencies

accelerate: 1.0.1
compel: 2.0.2
cuda: 12.4
diffusers: 0.31.0
numpy: 1.26.3
opencv: 4.9.0.80
onnx: 1.16.1
pillow: 10.2.0
python: 3.11.11
torch: 2.4.1+cu124
torchvision: 0.19.1+cu124
transformers: 4.46.3
xformers: Not Installed

What happened

I click generate and it exits with:

[2025-01-06 18:09:22,644]::[InvokeAI]::INFO --> Cleaned database (freed 0.01MB)
[2025-01-06 18:09:22,644]::[InvokeAI]::INFO --> Invoke running on http://0.0.0.0:9090/ (Press CTRL+C to quit)
[2025-01-06 18:11:10,050]::[InvokeAI]::INFO --> Executing queue item 91, session 1d1a923c-b431-4d28-b25e-0f1bc38690fa
C:\InvokeAI\.venv\Lib\site-packages\bitsandbytes\autograd\_functions.py:316: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
  warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
C:\InvokeAI\.venv\Lib\site-packages\transformers\models\clip\modeling_clip.py:540: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
Process exited with code: 3221225477

What you expected to happen

I expected generation to complete; instead it loads and then exits within about 20 seconds.

How to reproduce the problem

Generate anything with the non-quantized FLUX.1 Dev model.

Additional context

Thank you; I am very new to this but trying hard to learn.

Discord username

No response

DJP1973 avatar Jan 06 '25 23:01 DJP1973

Same error.

gigend avatar Jan 09 '25 07:01 gigend

I'm getting something similar with Flux Schnell on a 2070 Super, with Invoke installed via Stability Matrix:

\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:540: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(

KudintG avatar Jan 09 '25 18:01 KudintG

Same here. It started happening a few days ago, and I have no idea what caused it. So annoying! Does anyone know a fix, please?

tensorflow73 avatar Jan 11 '25 10:01 tensorflow73

I get this:

[2025-01-12 03:08:45,916]::[ModelInstallService]::INFO --> Model download complete: black-forest-labs/FLUX.1-dev
[2025-01-12 03:08:45,920]::[ModelInstallService]::INFO --> Model install started: black-forest-labs/FLUX.1-dev
[2025-01-12 03:08:45,924]::[ModelInstallService]::ERROR --> Model install error: black-forest-labs/FLUX.1-dev
InvalidModelConfigException: Unknown base model for /invokeai/models/tmpinstall_fvh2hdgc/FLUX.1-dev

TiddlyWiddly avatar Jan 12 '25 03:01 TiddlyWiddly

@gigend @KudintG @tensorflow73 Just to confirm, are you all seeing Process exited with code: 3221225477? Or just the same warnings that lead up to it? And, can you all confirm what version of Invoke you are seeing this on?

@TiddlyWiddly your error is unrelated to the main issue here. Please open a new bug report.

RyanJDick avatar Jan 13 '25 02:01 RyanJDick

I've found that the base Flux schnell model with the standard T5 works fine, but other flux models crash it out with the error pointing to the flash attention build issue. Pony, sd1.x and sdxl models all function normally.

tensorflow73 avatar Jan 13 '25 02:01 tensorflow73

> I've found that the base Flux schnell model with the standard T5 works fine, but other flux models crash it out with the error pointing to the flash attention build issue. Pony, sd1.x and sdxl models all function normally.

The warnings about flash attention are expected on Windows. The real error in the original bug report is Process exited with code: 3221225477. Are you seeing this same error? Or something else? And, what version of Invoke are you running?
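For reference, 3221225477 is the decimal form of the Windows NTSTATUS value 0xC0000005 (STATUS_ACCESS_VIOLATION), i.e. the process died from a native memory-access fault rather than a Python exception. A quick sketch to decode such codes:

```python
# Windows reports fatal native errors as NTSTATUS exit codes;
# printing the code in hex makes it recognizable.
code = 3221225477
print(hex(code))  # 0xc0000005 -> STATUS_ACCESS_VIOLATION
```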

RyanJDick avatar Jan 13 '25 14:01 RyanJDick

I am seeing the same error in 5.6.0:


Starting up...
Started Invoke process with PID: 24132
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
[2025-01-26 15:17:13,641]::[InvokeAI]::INFO --> Loading node pack clipInterrogator-invokeai-node
[2025-01-26 15:17:14,468]::[InvokeAI]::INFO --> Loaded 1 node packs from C:\Users\joe\invokeai2\nodes
[2025-01-26 15:17:17,605]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-01-26 15:17:19,427]::[InvokeAI]::INFO --> Using torch device: CPU
[2025-01-26 15:17:20,909]::[InvokeAI]::INFO --> InvokeAI version 5.6.0
[2025-01-26 15:17:20,910]::[InvokeAI]::INFO --> Root directory = C:\Users\joe\invokeai2
[2025-01-26 15:17:20,915]::[InvokeAI]::INFO --> Initializing database at C:\Users\joe\invokeai2\databases\invokeai.db
[2025-01-26 15:17:20,943]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 5599.30 MB. Heuristics applied: [1].
[2025-01-26 15:17:21,008]::[InvokeAI]::INFO --> Pruned 1 finished queue items
[2025-01-26 15:17:21,382]::[InvokeAI]::INFO --> Cleaned database (freed 0.01MB)
[2025-01-26 15:17:21,382]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090/ (Press CTRL+C to quit)
[2025-01-26 15:17:21,406]::[InvokeAI]::INFO --> Executing queue item 6581, session b4735278-a291-444d-a7da-b1b25edcdafa
Process exited with code 3221225477

joepeters-1 avatar Jan 26 '25 20:01 joepeters-1

See the docs here for tips on tuning memory usage for your system: https://invoke-ai.github.io/InvokeAI/features/low-vram

In particular, this section addresses common causes of exit code 3221225477: https://invoke-ai.github.io/InvokeAI/features/low-vram/?h=low+vram#windows-page-file
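Beyond the page-file fix, the low-VRAM docs describe settings in invokeai.yaml; a minimal fragment (option names per the linked docs for Invoke 5.6+, so verify against your version):

```yaml
# invokeai.yaml -- enable low-VRAM mode so large models are
# partially loaded instead of exhausting VRAM/commit memory
enable_partial_loading: true
```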

RyanJDick avatar Jan 28 '25 17:01 RyanJDick

Same issue here.

I have a GeForce RTX 4070 Ti, and I'm trying to install the "FLUX.1 Kontext dev" model on Invoke v6.0.0rc3.

Log:

[2025-07-05 02:03:19,554]::[ModelInstallService]::INFO --> Queueing model install: black-forest-labs/FLUX.1-Kontext-dev::flux1-kontext-dev.safetensors (1 file)
[2025-07-05 02:03:19,555]::[InvokeAI]::INFO --> Started installation of black-forest-labs/FLUX.1-Kontext-dev::flux1-kontext-dev.safetensors
[2025-07-05 02:03:20,356]::[DownloadQueueService]::INFO --> File download started: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/resolve/main/flux1-kontext-dev.safetensors
[2025-07-05 02:07:03,038]::[DownloadQueueService]::INFO --> Download complete: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/resolve/main/flux1-kontext-dev.safetensors
[2025-07-05 02:07:03,038]::[ModelInstallService]::INFO --> Model download complete: black-forest-labs/FLUX.1-Kontext-dev::flux1-kontext-dev.safetensors
[2025-07-05 02:07:03,039]::[ModelInstallService]::INFO --> Model install started: black-forest-labs/FLUX.1-Kontext-dev::flux1-kontext-dev.safetensors

Hashing FLUX.1-Kontext-dev_flux1-kontext-dev.safetensors:   0%|          | 0/1 [00:00<?, ?file/s]
Hashing FLUX.1-Kontext-dev_flux1-kontext-dev.safetensors:   0%|          | 0/1 [00:00<?, ?file/s]
Hashing FLUX.1-Kontext-dev_flux1-kontext-dev.safetensors: 100%|##########| 1/1 [01:06<00:00, 66.12s/file]
Hashing FLUX.1-Kontext-dev_flux1-kontext-dev.safetensors: 100%|##########| 1/1 [01:06<00:00, 66.13s/file]
Process exited with code 3221225477

vegeziel avatar Jul 05 '25 00:07 vegeziel

It turned out my problem was caused by low disk space. I installed Invoke on a secondary SSD with more than 80 GB free, but during model installation Windows used my primary disk (which had only 20 GB free) for paging. The official Invoke documentation helped me find the solution.

https://invoke-ai.github.io/InvokeAI/features/low-vram/#windows-page-file
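A quick way to check for this failure mode is to look at free space on the drive that holds the page file (C:\ by default), not just the drive Invoke is installed on. A minimal sketch, assuming the default page-file location:

```python
import os
import shutil

# Check free space on the system drive, where the Windows page
# file lives by default; model installs can exhaust it even when
# Invoke itself is on a different disk.
root = "C:\\" if os.name == "nt" else os.sep
usage = shutil.disk_usage(root)
print(f"free: {usage.free / 1024**3:.1f} GiB")
```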

vegeziel avatar Jul 05 '25 14:07 vegeziel

Same problem here with v1.8.1, Windows 11, RTX 4090 (24 GB), 64 GB RAM, more than 1 TB of free disk space, and a "system managed" page file size. Setting hashing_algorithm: random has no effect. Using an in-place install works for everything except FLUX.1-Fill-dev_flux1-fill-dev.safetensors, which still fails with Process exited with code 3221225477.

ironcladlou avatar Nov 02 '25 11:11 ironcladlou