Feature add DyPE support (experimental)
https://github.com/guyyariv/DyPE/tree/master
Flux only for now. I don't have enough VRAM to test it at very high resolutions (it seems to be working at 1536x1536, but it should be tested at up to 4096x4096 to be completely sure it's working as intended).
Use env variables to enable it:
- `FLUX_ROPE`: `DY_YARN`, `DY_NTK`, `YARN`, or `NTK` (any other value will use standard RoPE)
- `FLUX_DYPE_BASE_RESOLUTION` (defaults to `1024`, which should be best for base Flux; ~~maybe use `512` for Chroma? (untested yet)~~ `768` seems to work for Chroma, for some reason `512` didn't perform well at all in my testing, maybe use `1024` too)
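For illustration, here is a hypothetical Python sketch of how these variables could be read (the real implementation is C++ inside the Flux code path; the function name and `STANDARD` fallback label here are made up):

```python
import os

# Recognized FLUX_ROPE values; anything else falls back to standard RoPE.
ROPE_MODES = {"DY_YARN", "DY_NTK", "YARN", "NTK"}

def read_dype_config():
    # Hypothetical helper, for illustration only.
    mode = os.environ.get("FLUX_ROPE", "")
    if mode not in ROPE_MODES:
        mode = "STANDARD"  # any unrecognized value: standard RoPE
    base = int(os.environ.get("FLUX_DYPE_BASE_RESOLUTION", "1024"))
    return mode, base

print(read_dype_config())
```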
Example:
```
.\build\bin\sd.exe --diffusion-model ..\ComfyUI\models\unet\Flux\dev\flux1-dev-Q3_k.gguf --t5xxl ..\ComfyUI\models\clip\t5\t5xxl_q8_0.gguf --clip_l ..\ComfyUI\models\clip\clip_l\clip_l.safetensors --vae ..\ComfyUI\models\vae\flux\ae.safetensors -p "a lovely cat holding a sign says 'Flux cpp'" --cfg-scale 1 --sampling-method euler -W 1536 -H 1536 --vae-tiling --vae-tile-size 64
```
(base resolution 1024)
| default RoPE | DY_YARN | DY_NTK | YARN | NTK |
|---|---|---|---|---|
Test with Flux Schnell q3_k at 1792x1792 (biggest I could achieve using shared video memory without crashing), 6 steps
| default | DY_YARN |
|---|---|
(doesn't look very lovely, but it makes better use of the available image area, I guess?)
> Test with Flux Schnell q3_k at 1792x1792 (biggest I could achieve using shared video memory without crashing), 6 steps
How much video memory do you have?
> How much video memory do you have?
16GB VRAM + 16GB shared memory. But I no longer think the crash is related to OOM issues. The ROCm backend just crashes at high resolution (#948). ~~Maybe I should try again with Vulkan~~ Of course Vulkan won't work either, because of the buffer size limit.
Just wanted you to try a smaller model, just to add a little more headroom. ~~but my ggufs keep crashing on loading, even without the last commit :see_no_evil:~~ I am stupid, I just forgot `--clip-on-cpu`.
eg: https://huggingface.co/Green-Sky/flux.1-lite-8B-GGUF/blob/main/lora-experiments/hyper-flux.1-lite-8B-8step-q5_k.gguf
@Green-Sky I get the same crash at over 1792x1792, even with a tiny model like https://huggingface.co/Green-Sky/flux-mini-GGUF/blob/main/flux-mini-q4_k.gguf
I can generate 2048x1024 without problems(?), but when I try 3072x1024 it runs but always returns a black image (???).
Here is 2048x1024 without any rope manipulation:
and here with YARN (defaults):
Wait, I forgot about --diffusion-fa. I can run 2048x2048 just fine by enabling it.
Well, "fine": it's still slow, with about 7GB of VRAM to spare.
4096x4096 yields `stable-diffusion.cpp/ggml/src/ggml-cuda/cpy.cu:258: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed`
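For context, that assert means a single tensor's byte size exceeded `INT_MAX` (2³¹ − 1). A rough back-of-the-envelope sketch in Python of why 4096x4096 can cross that line (the fused-QKV shape and f32 dtype are my assumptions about which intermediate overflows, not taken from the actual failing tensor; the 8x VAE downsample, 2x2 patching, and 3072 hidden size are standard for Flux):

```python
INT_MAX = 2**31 - 1

def flux_token_count(width, height, vae_factor=8, patch=2):
    # pixels -> latent (8x VAE downsample) -> 2x2 patches = transformer tokens
    return (width // vae_factor // patch) * (height // vae_factor // patch)

def fused_qkv_bytes(tokens, hidden=3072, dtype_size=4):
    # hypothetical f32 activation of shape [tokens, 3 * hidden]
    return tokens * 3 * hidden * dtype_size

for res in (2048, 4096):
    t = flux_token_count(res, res)
    b = fused_qkv_bytes(t)
    print(f"{res}x{res}: {t} tokens, {b} bytes, overflows: {b > INT_MAX}")
# 2048x2048 (~0.6 GB) fits; 4096x4096 (~2.4 GB) exceeds INT_MAX
```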
Schnell q4_k, 6 steps, 2048x2048, with `--diffusion-fa` enabled
| default | DY_YARN |
|---|---|
Neither is looking good, but the one with DyPE at least has a cat in it?
Not sure if it's because of flash attention or if there's a bug somewhere, but I can't get any good results at high resolution, either with or without DyPE. Since I'm using previews, I can see that the first step generally looks okay-ish, but it gets darker and less detailed at every subsequent step.
High-resolution results look the same kind of broken as when using CFG with Flux (blurry, overly contrasted and so on), but I double-checked that CFG is not enabled.
I'm not sure whether generating ultra-high-resolution images would cause problems (for example, internal NaNs) for models that weren't specifically trained for that purpose. Previously, I tried using relatively large images as context inputs, which ended up producing black images, while lower-resolution inputs worked fine.
It looks like it's inducing some (slight) distortions on non-square resolutions; maybe the "base resolution" should be made aware of the targeted aspect ratio...
> Not sure if it's because of flash attention or if there's a bug somewhere, but I can't get any good results at high resolution, either with or without DyPE. Since I'm using previews, I can see that the first step generally looks okay-ish, but it gets darker and less detailed at every subsequent step.
>
> High-resolution results look the same kind of broken as when using CFG with Flux (blurry, overly contrasted and so on), but I double-checked that CFG is not enabled.
This doesn't seem to happen with Vulkan btw, ROCm only. @Green-Sky Which backend did you use for your 2048x1024 gens?
> This doesn't seem to happen with Vulkan btw, ROCm only. @Green-Sky Which backend did you use for your 2048x1024 gens?
CUDA on my rtx 2070 (8gig). The model I linked looked fine, and I think slightly better without RoPE scaling at that resolution. Edit: it is a single example, but I think the vertical stripes are stronger with RoPE scaling (YaRN).
> CUDA on my rtx 2070 (8gig)
Ok, so it seems like it's a ROCm-specific issue. For context, generating at 2048x1024 with Flux on my ROCm build spits out something like this, regardless of the RoPE modifications:
Vulkan works, but it's very slow; here's a preview of the 2048x1536 that it's currently generating at 158.96 s/it with `DY_YARN` enabled:
I got a driver timeout when I tried 2048x2048 on Vulkan.
It will now by default pick a base resolution with a similar aspect ratio and a pixel count close to `FLUX_DYPE_BASE_RESOLUTION`² pixels, unless you specify the exact desired dimensions of the base resolution with `FLUX_DYPE_BASE_RESOLUTION={Width}x{Height}`.
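The auto-selection described above can be sketched like this (a hypothetical Python reimplementation of the idea; the actual rounding in the code may differ slightly):

```python
import math

def auto_base_resolution(target_w, target_h, base=1024):
    # Pick a base (w, h) with the target's aspect ratio and ~base^2 pixels.
    aspect = target_w / target_h
    h = math.sqrt(base * base / aspect)
    w = aspect * h
    return round(w), round(h)

print(auto_base_resolution(1024, 2048))  # -> (724, 1448)
```

For a 1024x2048 target this yields 724x1448, matching the "1024 auto" column in the comparison table.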
This seems to fix the distortion issue completely; here's an example with a 1024x2048 using `DY_YARN`:
| No DyPE | DyPE 1024x1024 (old) | DyPE 724x1448 (1024 auto) | bonus: 1448x724 (oops) |
|---|---|---|---|