banding artifacts in hires fix
Using hires fix produces vertical/horizontal banding artifacts. I have tested with different hires upscalers; some are less noticeable, but the banding is still visible depending on the image. It is also more prominent with a higher hires multiplier. My example (on her face, click on full res to see):
You can read more about this issue here: https://www.reddit.com/r/StableDiffusion/comments/1eyrljx/what_am_i_doing_that_is_causing_the_banding/
I'm not sure about the possible Tiled Diffusion solution, though.
- Flux does not really work with "Latent" upscale
- you need to say which model you are using; some quants do not work well with img2img
- I just tested fp8 with a common ESRGAN hires fix and it works in 100% of cases without artifacts
I find that NF4 has a tendency to show some banding even without hires fix.
> - Flux does not really work with "Latent" upscale
> - you need to say which model you are using; some quants do not work well with img2img
> - I just tested fp8 with a common ESRGAN hires fix and it works in 100% of cases without artifacts
I have tested the fp8 version with the ESRGAN hires fix and the banding was also visible. It depends on the image and is not always prominent.
I get the same banding upscaling 1.75x in img2img at 0.35 denoise, so it's Flux itself giving me the banding. I'm running Flux1-Dev-Q8_0.gguf, but I've been getting it with the other versions too. I'm just assuming that's part of Flux for now.
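For context, a rough diffusers equivalent of that img2img pass (resize by 1.75x, then denoise at ~0.35 strength) might look like the sketch below. This is not the Forge code path; FluxImg2ImgPipeline, the model id, and the file names are assumptions for illustration only.

```python
# Rough diffusers equivalent of the img2img pass described above:
# resize the base image by 1.75x, then run a ~0.35-strength denoise over it.
# Sketch only; FluxImg2ImgPipeline, the model id and file names are assumptions.
import torch
from diffusers import FluxImg2ImgPipeline
from PIL import Image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

base = Image.open("base_1024.png").convert("RGB")        # hypothetical base render
w, h = (int(d * 1.75) // 16 * 16 for d in base.size)     # keep dims divisible by 16
upscaled = base.resize((w, h), Image.LANCZOS)

result = pipe(
    prompt="same prompt as the base generation",          # placeholder prompt
    image=upscaled,
    height=h,
    width=w,
    strength=0.35,               # the "denoise" value from the report above
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
result.save("i2i_upscaled.png")  # inspect flat areas for banding
```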
damn are there really no fixes to this?? it's driving me insane
> damn are there really no fixes to this?? it's driving me insane
I may have bad news for you. I think it's built into Flux itself. Try generating a 2000+ pixel image with Flux itself. No hires fix, no upscale... banding. That's what my testing has revealed.
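For anyone who wants to reproduce that test outside Forge, here is a minimal sketch assuming a recent diffusers build with FluxPipeline; the model id, resolution, and sampler settings are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch to check for banding at native high resolution: generate a
# 2000+ px image straight from Flux, with no hires fix and no upscaling.
# Model id, resolution and sampler settings below are illustrative only.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="portrait photo of a woman, soft studio light, detailed skin",
    height=2048,                 # 2000+ px on both sides, straight from the model
    width=2048,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]

image.save("flux_2048_native.png")   # inspect flat regions (skin, sky) for banding
```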
> - Flux does not really work with "Latent" upscale
> - you need to say which model you are using; some quants do not work well with img2img
> - I just tested fp8 with a common ESRGAN hires fix and it works in 100% of cases without artifacts
Any chance of looking into this and solving the issue?
Related topics: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1712 https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1821
Dang, I'm getting some great results but the banding has been plaguing me since Day 1 for about 50% of my images during upscale.
> Using hires fix produces vertical/horizontal banding artifacts. I have tested with different hires upscalers; some are less noticeable, but the banding is still visible depending on the image. It is also more prominent with a higher hires multiplier. My example (on her face, click on full res to see):
> You can read more about this issue here: https://www.reddit.com/r/StableDiffusion/comments/1eyrljx/what_am_i_doing_that_is_causing_the_banding/
> I'm not sure about the possible Tiled Diffusion solution, though.
Hi @Korner83 see this image
Any details about your results?
@Korner83 thanks for your reaction :-). I know that many people can be discouraged, but I use a scientific approach, and the results have value.
Here's how to fix it for free: use tiled sampling with tile sizes less than or equal to 1024.
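For reference, the idea behind tiled sampling (as in extensions like Tiled Diffusion / MultiDiffusion) is to keep every sampled region at or below a size the model handles cleanly, then feather-blend the overlapping tiles back together. A minimal sketch of that splitting and blending logic, with the per-tile denoise step left as a stub; tile size, overlap, and function names here are illustrative, not any extension's actual API:

```python
# Conceptual sketch of the tiled approach suggested above: keep every sampled
# region <= 1024 px, overlap the tiles, and feather-blend them back together.
# The per-tile denoise is left as a stub; in practice an extension such as
# Tiled Diffusion / MultiDiffusion does this inside the sampler.
import numpy as np

TILE = 1024      # max tile edge, per the suggestion above
OVERLAP = 128    # overlap between tiles so seams can be blended away

def tile_coords(size: int) -> list[tuple[int, int]]:
    """Start/end offsets of overlapping tiles covering one axis."""
    if size <= TILE:
        return [(0, size)]
    coords, start, step = [], 0, TILE - OVERLAP
    while start + TILE < size:
        coords.append((start, start + TILE))
        start += step
    coords.append((size - TILE, size))   # last tile sits flush with the edge
    return coords

def feather_weight(h: int, w: int) -> np.ndarray:
    """2D weight map that fades toward the tile borders for seamless blending."""
    def ramp(n: int) -> np.ndarray:
        edge = np.minimum(np.arange(1, n + 1), np.arange(n, 0, -1))
        return np.minimum(edge, OVERLAP) / OVERLAP
    return np.outer(ramp(h), ramp(w))

def tiled_denoise(image: np.ndarray, denoise_tile) -> np.ndarray:
    """image: float array (H, W, C); denoise_tile: callable running the model on one tile."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    weight = np.zeros((h, w, 1))
    for y0, y1 in tile_coords(h):
        for x0, x1 in tile_coords(w):
            tile = denoise_tile(image[y0:y1, x0:x1])          # model call, stubbed
            wgt = feather_weight(y1 - y0, x1 - x0)[..., None]
            out[y0:y1, x0:x1] += tile * wgt
            weight[y0:y1, x0:x1] += wgt
    return out / np.maximum(weight, 1e-8)
```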
> Here's how to fix it for free: use tiled sampling with tile sizes less than or equal to 1024.
Tiled sampling was invented to work around the resolution limitation problem; banding starts at resolutions where images are otherwise still good but get ruined by the banding. Tiled sampling also limits your upscaling options (algorithm, denoising strength 0.3-0.4), which limits the detail in the resulting image. For up to 5 megapixels, and for some images up to 8.3 megapixels (4K), it is better to avoid it.
I am totally shocked that the person who is trying to sell a solution isn't in favor of the free option! No one could have foreseen this.
- Who cares why it was invented if it solves this particular problem? (In actuality, it's a pretty similar problem.)
- It is atypical to use a super high denoise in a highres fix type workflow. Most of the time people don't want or need to effectively regenerate the whole image. Not sure who needs to use a ridiculously high denoise, but I don't and I'm guessing that applies to most other people as well.
You claim to have a solution, but you won't share it with anyone unless they pay you. Anyone can try what I suggested here and see if it works for them.
There is a more complex course here https://ko-fi.com/s/2f2d90c749 and here https://www.patreon.com/jpaedu/shop/mastering-ai-image-generation-with-flux-810873; it is a 44-slide presentation. I covered this issue and a few more to get the full potential out of Flux.
This particular issue I know how to deal with up to 4-5 megapixels, and up to 4K is still OK in some images. It is uncovered in #1712 already.
> scientific
in "scientific" you mean greedy? :)
> This particular issue I know how to deal with up to 4-5 megapixels, and up to 4K is still OK in some images.
You claim to know how to deal with it. There is no way for us to verify that. Saying "trust me" is meaningless.
> It is uncovered in https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1712 already.
Nothing was uncovered. You linked content that people would have to pay to see, we still have to trust that you're telling the truth. This may sound weird, but people on the internet don't always tell the truth. Especially when they're trying to extract money from the other party.
And no, "ChatGPT said my thing was great" isn't something we can rely on either.
> scientific
>
> in "scientific" you mean greedy? :)
Yes, give me all your money now! :)
Not in that way. I spent a lot of time with exclusion methods, for example; let's say (maybe) I also have good analytical skills. When I'm passionate, I have a lot of patience, a willingness to think about things, and generally the will to go deeper into the topics; I think other people lose their patience sooner.
@Korner83 you don't have to buy anything.
Imagine you have made a presentation for colleagues as training, and then you find that on the internet people have been struggling for months with several problems that, I think, I have satisfactorily solved or answered. What would you do when you already have a presentation available? I can offer anything for $; it's not greedy, it's just that the place is not ideal. Many courses cost money, and this is not just a collection of information from forums and YouTube.
@blepping I really like your answer, it makes sense to me.
> Nothing was uncovered. You linked content that people would have to pay to see, we still have to trust that you're telling the truth. This may sound weird, but people on the internet don't always tell the truth. Especially when they're trying to extract money from the other party.
OK, I will try to put a clear summary in #1712.
> You claim to know how to deal with it. There is no way for us to verify that. Saying "trust me" is meaningless.
The queen example here, and 2 similar examples (eternal female, Star Wars) in #1712.
> And no, "ChatGPT said my thing was great" isn't something we can rely on either.
I let ChatGPT make an abstract of the course and I was surprised by the result. I look at it as a sort of objectivity: some other "person" wrote it. I can confidently claim this is the content.
> @blepping I really like your answer, it makes sense to me.
I thought we had a bright future as internet rivals, now I'm almost disappointed!
> OK, I will try to put a clear summary
So the answer is simply to use this Acorn Is Spinning checkpoint? https://civitai.com/models/673188?modelVersionId=757421
I'm willing to give it a try, but having to use one very specific checkpoint doesn't feel like a good general solution, and it definitely can't be considered a long-term one.
If that is a solution, it looks like the person just hit on it randomly. There's nothing on the page about them deliberately trying to enable Flux to generate at higher than normal resolution.
Anyway, thank you for sharing that.
Hi @blepping
Yes, that model. I just put a summary in #1712 ;-)
A few percent of checkpoints are that good, but not too many. IMO additional training probably did that; BFL maybe didn't release the best-trained model on purpose. More in #1712.

