Add ability to report noise on each pass, allows halting on a noise threshold
Is your feature request related to a problem? Please describe.
It would be nice to be able to run a render until a specific noise threshold is reached, e.g. halting once noise is under 0.1%. This is similar to the settings in V-Ray and other renderers where you set a noise threshold and the iterative rendering halts at that point.
https://evermotion.org/tutorials/show/12467/v-ray-settings-explained-by-jamie-cardoso
Describe the solution you'd like
It would be quite easy to implement various halt strategies as long as we can report, on each render iteration, what the current noise level is.
I wonder if one could just compare the previous pixel value with the new pixel value after each pass, calculate what fraction of its RGB values changed, and then sum this up across the whole image.
Values that change very little are likely (I think) close to their final value, whereas those that are still varying significantly are not.
Something along these lines seems relatively straightforward to implement. Maybe there is a clear standard somewhere for this type of noise calculation?
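As a rough sketch of what that comparison could look like on the CPU - all names here are hypothetical, and it uses mean absolute RGB change per pixel rather than a strict "fraction changed":

```js
// Sketch only: assumes the previous and current accumulated frames are
// available as Float32Array RGBA buffers of the same size.
function computeNoiseEstimate( prevPixels, currPixels ) {

	const pixelCount = currPixels.length / 4;
	let total = 0;
	for ( let i = 0; i < currPixels.length; i += 4 ) {

		// mean absolute change of the RGB channels for this pixel
		const dr = Math.abs( currPixels[ i + 0 ] - prevPixels[ i + 0 ] );
		const dg = Math.abs( currPixels[ i + 1 ] - prevPixels[ i + 1 ] );
		const db = Math.abs( currPixels[ i + 2 ] - prevPixels[ i + 2 ] );
		total += ( dr + dg + db ) / 3;

	}

	// average per-pixel change across the image for this pass
	return total / pixelCount;

}
```

A halt strategy could then be as simple as rendering passes until this value drops below a user-set threshold.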
This stackexchange post seems to indicate that Blender does something similar to what you're suggesting, but I'm trying to understand exactly how it would work per pixel.
Presumably we compare after the new pixel is averaged into the image - so pixels like the background would show zero difference between passes after the first sample (assuming no blurring) while most pixels would be changing more dramatically, especially at the beginning of a render. Though I guess the most we'd expect a pixel to change per pass is 1.0 / num_samples.
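For what it's worth, that bound falls out of the running-average update. A minimal sketch of the reasoning, assuming channel values are clamped to [ 0, 1 ]:

```js
// Accumulating the nth sample into a running average:
//   newAvg = ( oldAvg * ( n - 1 ) + sample ) / n
//          = oldAvg + ( sample - oldAvg ) / n
// With sample and oldAvg both in [ 0, 1 ], | sample - oldAvg | <= 1,
// so each channel can move by at most 1 / n on a given pass.
function accumulateSample( oldAvg, sample, n ) {

	return oldAvg + ( sample - oldAvg ) / n;

}
```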
I'd think we'd want to keep track of the change over multiple frames to ensure the value isn't oscillating, or that it didn't just resolve one sample that happened to be the same as the existing fragment. We'd also need a setting to render a minimum number of samples.
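Something like the following could handle both concerns - a hypothetical sketch where the class name, window size, and defaults are all made up:

```js
// Smooths the per-pass noise estimate over a window and enforces a
// minimum sample count before a halt is allowed.
class ConvergenceCheck {

	constructor( { minSamples = 16, windowSize = 8, threshold = 0.001 } = {} ) {

		this.minSamples = minSamples;
		this.windowSize = windowSize;
		this.threshold = threshold;
		this.history = [];

	}

	// call once per pass with the latest noise estimate; returns true when
	// it's safe to halt
	update( noiseEstimate, sampleCount ) {

		this.history.push( noiseEstimate );
		if ( this.history.length > this.windowSize ) this.history.shift();

		if ( sampleCount < this.minSamples || this.history.length < this.windowSize ) return false;

		// require the average over the whole window to sit under the threshold
		// so a single lucky sample or an oscillation doesn't trigger an early halt
		const avg = this.history.reduce( ( a, b ) => a + b, 0 ) / this.history.length;
		return avg < this.threshold;

	}

}
```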
To start I think this can be a separate helper class like `NoiseDetector` that provides a noise delta between two images, accumulates the results to a single pixel, and reads it back. Or we could read the whole buffer of noise detection values back and sum them in a worker. Not sure which would be better.
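For the readback half, a very rough sketch of what the CPU-side sum could look like - assuming a difference shader has already written per-pixel noise values into the red channel of a FloatType render target. `readRenderTargetPixels` is the stock three.js readback API; everything else here is assumption:

```js
// Sketch only: pulls a noise render target back to the CPU and averages it.
function readNoiseEstimate( renderer, noiseTarget ) {

	const { width, height } = noiseTarget;
	const buffer = new Float32Array( width * height * 4 );
	renderer.readRenderTargetPixels( noiseTarget, 0, 0, width, height, buffer );

	// sum the red channel on the CPU - this loop could instead run in a
	// worker, or the GPU could reduce the target down to a single pixel
	// before readback
	let total = 0;
	for ( let i = 0; i < buffer.length; i += 4 ) total += buffer[ i ];

	return total / ( width * height );

}
```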
Eventually something like this per-pixel convergence threshold could be used to adaptively sample certain parts of the scene - ignoring the background or shiny surfaces that resolve quickly by masking them with a stencil buffer or second target and focusing samples on the noisy bits of the scene.
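As a hypothetical illustration of that last step, the per-pixel noise values could be thresholded into a mask that later passes use to skip converged pixels (fed into a stencil buffer or a mask texture):

```js
// Sketch only: 1 = keep sampling this pixel, 0 = converged, skip it.
function buildSampleMask( noiseBuffer, width, height, threshold = 0.001 ) {

	const mask = new Uint8Array( width * height );
	for ( let i = 0; i < mask.length; i ++ ) {

		// noise value assumed to live in the red channel of an RGBA buffer
		mask[ i ] = noiseBuffer[ i * 4 ] > threshold ? 1 : 0;

	}

	return mask;

}
```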