
[bug]: Hyper and turbo flux loras fail to load and patch

skunkworxdark opened this issue 3 months ago

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Install method

Invoke's Launcher

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 5090

GPU VRAM

32GB

Version number

V6.3

Browser

Chrome

System Information

The Hyper FLUX and FLUX Turbo LoRAs fail to load and patch in Invoke. These LoRAs allow generating on FLUX in a reduced number of steps (8 or 16).

https://huggingface.co/ByteDance/Hyper-SD/resolve/main/Hyper-FLUX.1-dev-8steps-lora.safetensors?download=true
https://huggingface.co/ByteDance/Hyper-SD/resolve/main/Hyper-FLUX.1-dev-16steps-lora.safetensors?download=true
https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha

What happened

They fail to load because they contain a norm_out.linear (final_layer.adaLN_modulation.1) layer that the patching code does not handle. In the current version you get this error:

  File "D:\ai\invokeai\.venv\Lib\site-packages\invokeai\backend\patches\lora_conversions\flux_diffusers_lora_conversion_utils.py", line 228, in lora_layers_from_flux_diffusers_grouped_state_dict
    assert len(grouped_state_dict) == 0
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
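For illustration, the failure mode can be mimicked with a small sketch. All names below (`KNOWN_PREFIXES`, `convert_grouped_state_dict`) are hypothetical; only the key-consumption pattern mirrors the converter in `flux_diffusers_lora_conversion_utils.py`, which pops every key group it knows how to convert and then asserts the dict is empty. Any unrecognized group, such as the `norm_out.linear` weights in these LoRAs, trips the assertion.

```python
# Sketch of the key-consumption pattern behind the AssertionError.
# Hypothetical names; only the pattern mirrors Invoke's converter.

KNOWN_PREFIXES = {"transformer_blocks", "single_transformer_blocks"}  # assumed subset

def convert_grouped_state_dict(grouped_state_dict: dict) -> dict:
    """Pop every key group the converter understands; anything left is unknown."""
    converted = {}
    for prefix in list(grouped_state_dict):
        root = prefix.split(".")[0]
        if root in KNOWN_PREFIXES:
            converted[prefix] = grouped_state_dict.pop(prefix)
    # The real converter asserts that all keys were consumed:
    assert len(grouped_state_dict) == 0, f"unhandled LoRA keys: {list(grouped_state_dict)}"
    return converted

# A Hyper/Turbo FLUX LoRA carries an extra norm_out.linear group,
# so the assertion fires:
sd = {"transformer_blocks.0.attn": "...", "norm_out.linear": "..."}
try:
    convert_grouped_state_dict(sd)
except AssertionError as e:
    print(e)  # unhandled LoRA keys: ['norm_out.linear']
```

A fix would need to either map the `norm_out.linear` group onto the corresponding `final_layer.adaLN_modulation.1` target or explicitly skip it before the final assertion.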

What you expected to happen

The LoRAs should load and patch correctly, reducing the number of steps needed to generate.

How to reproduce the problem

Install one of the LoRAs and apply it to a generation.

Additional context

I did some investigation and attempted to update the LoRA patching. I was partially successful, but wasn't happy with the result, as it didn't fit in cleanly with the existing LoRA patching. So far I have been unable to make it work within the existing code structure.

Discord username

skunkworxdark

skunkworxdark · Aug 11 '25 11:08