
Can I get Z-buffer-like Depth?

Open · PeizhiYan opened this issue 9 months ago · 1 comment

I’m working on a project that blends rendered 3D meshes and 3D Gaussians into a single image. My current solution is straightforward: I use the depth maps from each method and blend pixels based on which object is closer. My code is: https://github.com/PeizhiYan/gmesh/blob/99955af9429792a8350521f3e5ab64887b2d3196/gmesh/utils/image_utils.py#L36

However, since the depth map from the 3D Gaussian Splatting renderer is computed as a weighted average (the “expected depth” or ED mode), the results can be inaccurate, especially when multiple Gaussians overlap at a pixel. In these cases, a distant Gaussian can “drag” the pixel depth backward, even if a closer Gaussian or the mesh is visually dominant. This leads to artifacts and incorrect compositing in the final image (see the attached figure: Gaussians in green, mesh in blue).
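The dragging effect is easy to reproduce with two Gaussians covering one pixel: even when the near Gaussian dominates visually, the alpha-weighted average pulls the depth far behind it (a toy NumPy calculation illustrating the standard front-to-back weighting, not gsplat code):

```python
import numpy as np

# Two Gaussians on the same pixel, sorted front to back.
alphas = np.array([0.6, 0.9])   # near Gaussian, far Gaussian
depths = np.array([1.0, 10.0])  # near at z = 1, far at z = 10

# Front-to-back transmittance: T_i = prod_{j < i} (1 - alpha_j)
T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
w = T * alphas                  # per-Gaussian compositing weights

expected_depth = (w * depths).sum() / w.sum()
# weights [0.6, 0.36] -> (0.6*1 + 0.36*10) / 0.96 = 4.375
```

So a mesh surface at z = 2 would incorrectly occlude this pixel, even though the visually dominant Gaussian sits at z = 1.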

[Image: compositing artifact (Gaussians in green, mesh in blue)]

Is there a recommended way to achieve more “z-buffer-like” depth behavior for hybrid rendering, so that the pixel depth is determined by the closest visible contributor?

Any advice or best practices would be greatly appreciated!

PeizhiYan avatar Jul 04 '25 00:07 PeizhiYan

I am wondering if it would be possible to pass the RGB image and depth map rendered from the 3D mesh as background inputs to the Gaussian rasterization function. Then, in the Gaussian rasterization, the per-pixel background color and depth could be used for compositing:

  • At each pixel, if the Gaussian is in front of the mesh (based on depth), the Gaussian color/opacity is composited over the mesh’s RGB.
  • If the mesh is closer, the mesh’s color remains dominant.

This would allow for more accurate hybrid rendering and occlusion between meshes and Gaussians.
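The per-pixel rule above could be prototyped outside the rasterizer roughly like this (a sketch under stated assumptions: per-pixel Gaussian fragments are available and depth-sorted, and `mesh_rgb`/`mesh_depth` are the mesh render treated as a depth-tested, opaque background):

```python
import numpy as np

def composite_with_mesh(frag_depths, frag_alphas, frag_colors,
                        mesh_rgb, mesh_depth):
    """Front-to-back compositing for a single pixel.

    Gaussians in front of the mesh surface are alpha-blended over it;
    once traversal passes mesh_depth, the mesh is composited with the
    remaining transmittance and everything behind it is occluded.
    """
    color = np.zeros(3)
    T = 1.0  # remaining transmittance
    mesh_done = False
    for d, a, c in sorted(zip(frag_depths, frag_alphas, frag_colors)):
        if not mesh_done and d > mesh_depth:
            color += T * mesh_rgb   # opaque mesh ends the traversal
            T = 0.0
            mesh_done = True
            break
        color += T * a * np.asarray(c, dtype=float)
        T *= (1.0 - a)
    if not mesh_done:
        color += T * mesh_rgb       # mesh visible behind all Gaussians
    return color

# Near green Gaussian (z=1) in front of a blue mesh (z=2);
# a far Gaussian (z=10) is correctly hidden behind the mesh.
c = composite_with_mesh([1.0, 10.0], [0.6, 0.9],
                        [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]],
                        mesh_rgb=np.array([0.0, 0.0, 1.0]),
                        mesh_depth=2.0)
# -> 0.6 green from the near Gaussian plus 0.4 blue from the mesh
```

In an actual integration this test would live inside the per-tile compositing loop of the rasterization kernel, comparing each Gaussian's depth against the background depth map before accumulating its contribution.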

[Image: diagram of the proposed per-pixel depth compositing between mesh and Gaussians]

PeizhiYan avatar Jul 04 '25 04:07 PeizhiYan