PPF_Noise_ComfyUI
The exponent isn't used by default...?
Am I reading this right? The `amplitude` starts at 1.0 and is repeatedly multiplied by `persistence`. The default `persistence` is 1.0, which means `amplitude` stays at 1.0 through all of those multiplications. Noise is then generated per-channel with

```python
noise_value_r = noise(nx + X, ny + Y, nz + Z, p) * amplitude ** exponent
```

Python's operator precedence raises `amplitude` to the power of `exponent` first, and only then multiplies that result by the `noise()` call. Because `amplitude` stays at 1.0 by default, and 1.0 raised to any non-NaN power is 1.0, the exponent is simply ignored unless `persistence` has been set to something other than 1.0.
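For concreteness, here is a minimal sketch (with made-up values) of the precedence and the degenerate default:

```python
# `**` binds tighter than `*`, so the exponent only ever touches amplitude.
noise_sample = -0.37   # stand-in for a noise() call returning values in [-1, 1]
amplitude = 1.0        # never changes, because persistence defaults to 1.0
exponent = 4.0

value = noise_sample * amplitude ** exponent   # parsed as noise_sample * (amplitude ** exponent)
assert value == noise_sample                   # 1.0 ** 4.0 == 1.0, so exponent has no effect

# With any persistence other than 1.0, amplitude changes per octave and the
# exponent finally does something:
persistence = 0.5
amplitude *= persistence                       # 0.5 after one octave
value = noise_sample * amplitude ** exponent   # now scaled by 0.5 ** 4.0 == 0.0625
```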
I am guessing this is not the intended behavior. Without exponentiation involved, I don't think this actually differs from ordinary Perlin noise with fractal Brownian motion (the most common way I've seen Perlin noise used), so I'd suggest changing the default `persistence` so this does something more-or-less novel by default.
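To illustrate how the two parameters interact, here is a generic multi-octave accumulator sketch — not this repo's actual code, just the standard FBM shape with the `amplitude ** exponent` term from the snippet above:

```python
def fbm(noise, x, y, octaves=8, persistence=0.5, lacunarity=2.0, exponent=1.0):
    """Generic fractal-Brownian-motion accumulator (a sketch, not the repo's code).

    With persistence=1.0, every octave's amplitude stays 1.0, so
    amplitude ** exponent is always 1.0 and `exponent` is a no-op.
    Any persistence != 1.0 makes `exponent` reshape the octave falloff.
    """
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += noise(x * frequency, y * frequency) * amplitude ** exponent
        amplitude *= persistence
        frequency *= lacunarity
    return total
```

With a constant "noise" of 1.0 and persistence=1.0, the result is just the octave count regardless of exponent; with persistence=0.5 the octaves decay geometrically, and raising the exponent steepens that decay.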
I'd also strongly suggest documenting the range of values `noise()` can return, since there's no single standard across noise implementations, and exponentiation behaves very differently when it can apply to both negative and positive numbers rather than just positive ones. It looks like your `noise()` can return values between -1 and 1, which I believe matches Ken Perlin's original implementation, but that seems a little curious to use for RGBA channels. In the context of this generator, does it make sense to have a negative value for red, green, or blue? Should values be clamped, should the noise be scaled and shifted into the 0-1 range, or should the noise be left alone?

Typically, with 8 octaves of Perlin noise, results cluster near the center of the range (around 0 for noise between -1 and 1) and only rarely reach the extremes. Clamping would therefore produce 0 or a low value most of the time by far, which could be optimal in this case because high outliers would be rare. Scaling and shifting the noise into the 0 to 1 range would put most results near 0.5.

Perlin and, more recently, Schlick have published good "bias" and "gain" functions that can be used to emphasize or de-emphasize extreme values. Barron published a convenient micro-paper, https://arxiv.org/abs/2010.09714 , describing a way to control the emphasis even further using two parameters; it applies to values in the 0 to 1 range and produces results in that range as well. If you use any of these parameterized emphasis controls, you can expose the parameters to users too! That kind of control is rarely present when noise is provided by libraries, and it's very powerful when someone needs really tight control.
Thanks for thinking about ways to improve AI art! It's great to see how code has become so influential to this new field. It would be interesting if someone could figure out just how and why using Perlin (power fractal) noise is more effective than whatever Comfy uses without this. I'm guessing that since the distribution of Perlin noise is quite different from, say, white noise, if Comfy used white noise it would have more random outliers relative to any form of multi-octave Perlin (or Simplex) noise.