
Path-tracing reference renderer

Open kpreid opened this issue 6 months ago • 2 comments

I’m not a big fan of path-tracing because it fundamentally contains noise and, in today’s real-time implementations, temporal artifacts. However, it is interesting to compare the results of path-traced global illumination to our interpolated block-scale global illumination, since the latter can be seen as a cached approximation of the former. Commit a01609e19cdaeeadef7477520791a1540882bd89 added a proof-of-concept implementation to the (CPU) raytracer. Further work is needed to make this more usable.

  • [ ] Reprojection of previous frames’ data to the latest camera projection.

    This will enable accumulation for lower noise, but is also necessary for usable interactive performance.

    • [x] Basic implementation: done in 9c2b258a10c99082990a47f66c958ce76d7f7bb5.
    • [x] Distinguish render layers (UI and world) so the UI is not reprojected using world camera movement. (Done in 9c2b258a10c99082990a47f66c958ce76d7f7bb5.)
      • [ ] Actually, we need to fully split the render layers into separate output textures, so that the UI doesn't occlude part of the reprojected world or get its pixels mixed into the gap filling.
    • [ ] Implement filling in the gaps in reprojected frames in a more thorough and efficient way (e.g. jump flooding; see the sketch after this list) than drawing large points of an approximate size.
    • [ ] Enable use of reprojection once the pieces are done well.
  • [ ] Accumulation of samples over multiple frames so at least stationary content can be low-noise.

  • [ ] Possibly, blending multiple samples from different points that are known to be on the same voxel surface, to reduce noise using our knowledge of the world structure.

  • [ ] For non-interactive uses, configurable sample count without any accumulation process, to get the brute-force answer as efficiently as possible.
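
As a concrete illustration of the jump-flood gap filling mentioned above, here is a minimal CPU sketch. It is not the all-is-cubes implementation; the function name `jump_flood_fill` and the `Option<[f32; 3]>` buffer representation are assumptions made for the example. Each pixel repeatedly examines neighbors at halving step sizes and remembers the nearest seeded (reprojected) pixel, so every gap pixel ends up copying the color of its nearest filled neighbor.

```rust
/// Minimal CPU sketch of jump-flood gap filling (hypothetical; not the actual
/// all-is-cubes code). `color[i]` is Some(rgb) where a reprojected point landed
/// and None in the gaps; the result fills every gap pixel with the color of its
/// nearest seeded pixel.
fn jump_flood_fill(color: &[Option<[f32; 3]>], width: usize, height: usize) -> Vec<[f32; 3]> {
    // Each cell stores the coordinates of the nearest known seed found so far.
    let mut nearest: Vec<Option<(usize, usize)>> = (0..width * height)
        .map(|i| color[i].map(|_| (i % width, i / width)))
        .collect();

    // Classic JFA schedule: neighbor offsets of N/2, N/4, …, 1 pixels.
    let mut step = (width.max(height) / 2).max(1);
    loop {
        let prev = nearest.clone();
        for y in 0..height {
            for x in 0..width {
                let dist2 = |(sx, sy): (usize, usize)| -> isize {
                    let dx = sx as isize - x as isize;
                    let dy = sy as isize - y as isize;
                    dx * dx + dy * dy
                };
                let mut best = prev[y * width + x];
                for dy in [-1isize, 0, 1] {
                    for dx in [-1isize, 0, 1] {
                        let nx = x as isize + dx * step as isize;
                        let ny = y as isize + dy * step as isize;
                        if nx < 0 || ny < 0 || nx >= width as isize || ny >= height as isize {
                            continue;
                        }
                        if let Some(seed) = prev[ny as usize * width + nx as usize] {
                            if best.map_or(true, |b| dist2(seed) < dist2(b)) {
                                best = Some(seed);
                            }
                        }
                    }
                }
                nearest[y * width + x] = best;
            }
        }
        if step == 1 {
            break;
        }
        step /= 2;
    }

    // Copy each pixel's nearest seed color; pixels with no seeds at all stay black.
    (0..width * height)
        .map(|i| nearest[i].and_then(|(sx, sy)| color[sy * width + sx]).unwrap_or([0.0; 3]))
        .collect()
}
```

A real version would also need to know which layer (UI or world) each seed came from, which is exactly why the render-layer split above is a prerequisite.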

kpreid · Jun 13 '25 01:06

Commit 9c2b258a10c99082990a47f66c958ce76d7f7bb5 adds the core implementation of reprojection, but it needs significant further work so it is not yet enabled.

I also came to realize while working on it that the problem I solved is different from the conventional reprojection done for temporal accumulation: what I wrote is “forwards”, taking the previous frame’s data and drawing it as points under the new camera, whereas the usual approach is to go “backwards”, using the current frame’s depth to look up the previous frame’s color. But the backwards approach only works when the raytracer is fast enough to collect at least one sample per displayed frame; without that, we have to take a more “lidar point cloud” approach to rendering. I can change this once I have implemented GPU ray tracing.
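
For reference, the “backwards” variant amounts to unprojecting each pixel using the current frame’s depth and re-projecting the resulting world point with the previous frame’s camera. The following is a minimal sketch under the assumption of glam-style matrices and NDC coordinates; it is not the all-is-cubes camera API, and the function name is hypothetical.

```rust
use glam::{Mat4, Vec3};

/// Hypothetical sketch of backwards reprojection: given a pixel's NDC position
/// and depth in the current frame, find where that world point fell on screen
/// in the previous frame, so the previous frame's color can be reused there.
fn reproject_backwards(
    ndc_xy: [f32; 2],         // current-frame NDC x, y in [-1, 1]
    ndc_depth: f32,           // current-frame NDC depth for this pixel
    current_view_proj: Mat4,  // current frame's view-projection matrix
    previous_view_proj: Mat4, // previous frame's view-projection matrix
) -> Option<[f32; 2]> {
    // Unproject the current-frame point back into world space
    // (project_point3 applies the perspective divide).
    let world = current_view_proj
        .inverse()
        .project_point3(Vec3::new(ndc_xy[0], ndc_xy[1], ndc_depth));

    // Project that world point with the previous frame's camera.
    let prev = previous_view_proj.project_point3(world);

    // Only usable if the point was on screen in the previous frame.
    if prev.x.abs() <= 1.0 && prev.y.abs() <= 1.0 {
        Some([prev.x, prev.y])
    } else {
        None
    }
}
```

Note that this requires per-pixel depth for the current frame, which is exactly what a too-slow raytracer cannot provide every frame; the hybrid idea in the next paragraph would supply that depth by rasterization.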

Or, we could take a hybrid approach: rasterize meshes to obtain depth information, then reproject backwards to use stale raytracing results as if they are world space light data! However, I don't think that’s worth doing in itself; it might be worth exploring in a future when we have GPU raytracing ready.

kpreid · Jul 19 '25 00:07

Current status of work on this:

  • I have an unmerged implementation of gap-filling. However, it has very visible flaws:
    • Pixels from the UI spread and contaminate the view of the world. Therefore, before we can use it seriously, we need to either identify and exclude UI pixels, or create separate textures for processing the UI and world layers.
    • Even though it blends multiple pixels, single-pixel noise gets amplified into bright flashes. Therefore, we need de-noising before gap filling, or a cleverer blending strategy that spreads the blended pixels back toward the edges of the gap.
  • I have an unmerged implementation of denoising based on averaging with neighbors. However, it is not very effective because there is so much noise that looking at immediate neighbors does not reliably catch the rare bright pixels.
    • It needs a wider area to be useful, and I think it therefore should use the same kind of jump-flood-based averaging that the gap filler does; however, in order to do that, we need to give mip_ping support for processing multiple textures' worth of channels so we can distinguish surfaces.
    • Stronger filtering will tend to blur the voxels on a surface. Ideally we blur light, but not surface color.
    • We should also accumulate from previous frames, and should consider doing that exclusively instead of spatial averaging (a minimal accumulation sketch follows this list).
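
To make the accumulation idea concrete, here is a minimal sketch of per-pixel temporal accumulation, assuming reprojection tells us whether a pixel has valid history. The type and method names are invented for the example, and the blend-factor floor of 0.1 is an arbitrary illustrative choice, not a tuned value.

```rust
/// Hypothetical per-pixel temporal accumulation buffer (not an actual
/// all-is-cubes type). Each new noisy sample is blended into a running
/// average; pixels whose reprojection failed start over from scratch.
struct AccumulationBuffer {
    /// Accumulated linear RGB and the number of samples blended so far.
    history: Vec<([f32; 3], u32)>,
}

impl AccumulationBuffer {
    fn new(pixel_count: usize) -> Self {
        Self { history: vec![([0.0; 3], 0); pixel_count] }
    }

    /// Blend one new sample into pixel `index` and return the accumulated color.
    /// `reprojection_valid` is false when camera motion left no usable history.
    fn add_sample(
        &mut self,
        index: usize,
        sample: [f32; 3],
        reprojection_valid: bool,
    ) -> [f32; 3] {
        let entry = &mut self.history[index];
        if !reprojection_valid {
            entry.1 = 0; // discard stale history
        }
        entry.1 += 1;
        // A true running mean for the first few samples, then a fixed
        // exponential blend so old (possibly stale) history fades out.
        let alpha = (1.0 / entry.1 as f32).max(0.1);
        for c in 0..3 {
            entry.0[c] += alpha * (sample[c] - entry.0[c]);
        }
        entry.0
    }
}
```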

Therefore, next actions:

  • Modify raytracer::Hit so that, when possible, it presents surface color separately from illumination (irradiance), rather than their already-multiplied product. That way we can de-noise the illumination without blurring the surface. (A sketch of the split follows this list.)
    • Side benefit: accumulators that don’t care about illumination can skip that calculation by not asking for it.
    • Note that this contains an assumption that there is a deterministic surface and a noisy reflection/illumination. This assumption is violated if we e.g. take a Monte Carlo approach to volumetric absorption/scattering or antialiasing, or if we have mirror reflections. I think we should declare those things out of scope for now.
  • Modify mip_ping to support multiple textures processed in parallel.
  • Modify raytrace_to_texture to store world and UI pixels separately.
    • This will also enable the possibility of reprojection of the UI pixels, which is currently not useful but will be useful when the UI contains any scrolling/panning.
  • Think about how we want to approach accumulating from previous frames.
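
As a rough illustration of the proposed raytracer::Hit change, the per-hit data would carry the deterministic reflectance and the noisy light estimate separately, and they would only be multiplied after the illumination has been de-noised. The names `SplitHit` and `shade` are hypothetical, not the actual API.

```rust
/// Hypothetical shape of the proposed split; not the actual raytracer::Hit API.
struct SplitHit {
    /// Deterministic surface reflectance at the hit point, with no lighting applied.
    surface_color: [f32; 3],
    /// Noisy Monte Carlo estimate of incident light (irradiance) at the hit point.
    illumination: [f32; 3],
}

impl SplitHit {
    /// Recombine only after the illumination channel (possibly averaged over
    /// many samples) has been de-noised, so filtering never blurs the surface color.
    fn shade(&self, denoised_illumination: [f32; 3]) -> [f32; 3] {
        std::array::from_fn(|i| self.surface_color[i] * denoised_illumination[i])
    }
}
```

An accumulator that does not request illumination could then skip that computation entirely, per the side benefit noted above.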

kpreid · Aug 17 '25 21:08