Denoising Is Broken, at least on Flux
Original image
Denoising 0.79
Denoising 0.8
I noticed that Flux takes a far higher denoising strength to make any changes for hires fix, but this appears to be related to a general issue with denoising for Flux: the image is barely changed at all below 0.80 denoising.
Would be REALLY nice to have this fixed, as img2img with Flux is essentially pointless with this issue.
"The denoise value indicates the percentage of final steps that are done. For example, for a 20-step sampling, a 0.75 denoise means that the sampling starts at step 5 and ends at step 20."
Original post: https://www.reddit.com/r/StableDiffusion/comments/1htnkqg/flux_in_comfy_why_is_denoise_such_a_strong_lever/
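The quoted rule can be sketched as a tiny standalone helper (`start_step` is a hypothetical illustration of the described behavior, not actual Forge or ComfyUI code):

```python
def start_step(total_steps: int, denoise: float) -> int:
    """Index of the first step actually sampled: denoise is the fraction
    of the final steps that get executed, so the rest are skipped."""
    return total_steps - int(total_steps * denoise)

# 20 steps at denoise 0.75 -> 15 steps executed, starting at step 5
print(start_step(20, 0.75))  # 5
```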
Had the same problem as you, raised the number of steps from 4 to 6 and it solved it.
I know this is an issue report for Forge, but I experience the same issue in ComfyUI.
I think it is worth putting up some of my findings after seeing @Zbuling's comment.
I've been playing around with this CivitAI model that can generate really impressive images in as few as 4 steps. However, I've noticed something really interesting with the "denoising strength" setting in an img2img workflow that I created.
When I set the denoise value to 0.80 or lower, the generated image looks incredibly similar to my input image. But the moment I raise the denoise value to 0.81, the image transforms completely, like what you reported.
I dug into the source code of the Basic Scheduler in ComfyUI.
After looking at the code, it seems the behavior is directly linked to how the denoise value interacts with the scheduler's logic. Here's what I've figured out:
- First, I'm not an AI engineer; I'm just trying to explain what I understand from the source code.
- When the denoise strength is less than 1.0, the scheduler actually increases the total number of steps it uses to create the noise schedule. These are not the steps we see being processed, but the schedule for the noise that gets added to the original latent. In our case, instead of random noise, it is an image.
- It does this by dividing my chosen number of sampling steps (in this case, 4) by the denoise value. The result determines the length of the full noise schedule.
- Since we're dealing with positive numbers, `int()` casting acts like `Math.floor`. Here is the table for my use case of 4 steps:
| Denoise | Total scheduler steps |
|---|---|
| 0.79 | `int(4 / 0.79) = 5` |
| 0.80 | `int(4 / 0.80) = 5` |
| 0.81 | `int(4 / 0.81) = 4` |
| 0.82 | `int(4 / 0.82) = 4` |
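The threshold in the table can be reproduced directly in plain Python (a standalone sketch of the division, not ComfyUI code):

```python
steps = 4
for denoise in (0.79, 0.80, 0.81, 0.82):
    # int() truncates toward zero, which equals floor for positive values
    total = int(steps / denoise)
    print(f"denoise={denoise:.2f} -> schedule length {total}")
```

Running this shows the schedule length dropping from 5 to 4 exactly between 0.80 and 0.81.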
What this means is that with lower denoise values, I'm essentially only using the very tail end of the noise schedule.
When the denoise value increases and crosses a critical threshold around 0.81, there's a sharp transition in behavior. Specifically, the total number of steps calculated by the scheduler drops from 5 to 4:
| Step | `int(4 / 0.80) = 5` | `int(4 / 0.81) = 4` |
|---|---|---|
| Step 1 | input image | input image |
| Step 2 | input image | input image |
| Step 3 | input image | input image |
| Step 4 (end) | input image | input image + new noise |
| Step 5 (never executed) | (skipped) input image + noise | (skipped) step 4 result + new noise |
Now, here's where that sharp transition around 0.81 comes in. When the denoise value crosses that threshold, the calculated total number of steps drops back down to my specified 4 steps. This means the noise scheduler adds noise at step 4, which injects a large amount of random noise into my original image.
So, it's not a bug in the model or the scheduler, but rather a direct consequence of how the denoise value influences the length of the noise schedule and which part of that schedule is used for sampling.
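If my reading is right, the schedule slicing can be sketched like this. This is a simplified illustration of the behavior described above, not the actual ComfyUI code, and `make_sigmas` is a hypothetical linear stand-in for the real model-specific sigma function:

```python
def make_sigmas(n: int) -> list[float]:
    """Placeholder noise schedule: n + 1 sigma values from 1.0 down to 0.0."""
    return [1.0 - i / n for i in range(n + 1)]

def sigmas_for(steps: int, denoise: float) -> list[float]:
    """Stretch the schedule by 1/denoise, then keep only its final `steps` steps."""
    if denoise >= 1.0:
        return make_sigmas(steps)
    total = int(steps / denoise)  # e.g. int(4 / 0.80) = 5, int(4 / 0.81) = 4
    return make_sigmas(total)[-(steps + 1):]

# At denoise 0.80 sampling starts partway down the stretched 5-step schedule
# (lower starting sigma, so little new noise is added); at 0.81 the tail is
# the entire 4-step schedule, so sampling starts from the very top.
print(sigmas_for(4, 0.80))
print(sigmas_for(4, 0.81))
```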
However, correct me if I'm wrong about how this works.