stanford-shapenet-renderer

A slight bias of output depth maps

zParquet opened this issue 10 months ago · 3 comments

Hello, I've run into a problem: the output depth maps seem to have a slight offset from the ground-truth depth.

For example, I rendered 90 views around a ShapeNet model and ran TSDF fusion on the output depth maps (in EXR format). The yellow image below shows the fusion result, and the gray image shows the GT mesh. The two shapes look identical, but when overlaid, the fusion result is slightly but consistently thicker than the GT.

[image: overlay of the TSDF fusion result (yellow) and the GT mesh (gray)]

I noticed this because I am trying to train a module of my network supervised by very accurate depth maps. I found a small bias in my trained network's outputs and eventually traced it back to inaccurate depth maps. It bothered me for a while. Do you have any thoughts on this?

PS: The output depth maps are 16-bit, OPEN_EXR format.
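For what it's worth, a 16-bit EXR stores half-precision floats (~10 mantissa bits), which alone introduces a small, systematic rounding of depth values. A minimal NumPy sketch (no EXR library needed; the 0.5–4.0 m depth range is an illustrative assumption, not taken from the renderer):

```python
import numpy as np

# Simulate the precision loss of writing float32 depth to a 16-bit
# (half-float) EXR: cast to float16 and back, then measure the error.
depth_f32 = np.linspace(0.5, 4.0, 100_000, dtype=np.float32)  # hypothetical depth range in meters
depth_f16 = depth_f32.astype(np.float16).astype(np.float32)

max_err = np.abs(depth_f32 - depth_f16).max()
# Half precision has a relative step of about 2**-11 ~ 4.9e-4, so
# depths near 4 m can be off by roughly a millimeter from rounding alone.
print(f"max quantization error: {max_err:.6f} m")
```

This is usually smaller than a visibly "thicker" fusion result, but it is one reason to prefer 32-bit EXR output when the depth maps supervise a network.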

zParquet avatar Aug 29 '23 14:08 zParquet

I once had similar suspicions about Blender depth renderings but did not investigate further, so I am very interested in an explanation of this phenomenon.

I am not really familiar with TSDF Fusion. Do you have to provide both the intrinsic and extrinsic camera parameters to the algorithm?
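For context, TSDF fusion does typically need both: intrinsics to back-project each depth pixel into camera space, and extrinsics to place it in world space. A hedged sketch of that lifting step (the matrix values here are illustrative assumptions, not taken from the renderer script):

```python
import numpy as np

# Hypothetical intrinsics for a 512x512 image; not from the script.
K = np.array([[555.5,   0.0, 256.0],
              [  0.0, 555.5, 256.0],
              [  0.0,   0.0,   1.0]])
T_cam2world = np.eye(4)  # identity extrinsics, for the sketch

def backproject(u, v, depth, K, T):
    """Lift pixel (u, v) with metric depth to a world-space 3D point."""
    p_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    p_world = T @ np.append(p_cam, 1.0)  # homogeneous transform
    return p_world[:3]

p = backproject(256.0, 256.0, 2.0, K, T_cam2world)
# The principal-point pixel at depth 2 maps to (0, 0, 2) in this frame.
print(p)
```

If either matrix is slightly wrong (or uses a different camera convention than the fusion code expects), the reconstructed surface shifts or thickens even when the depth maps themselves are exact.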

mvoelk avatar Aug 30 '23 15:08 mvoelk

I later found there is no problem with the rendering script. The problem is that TSDF fusion makes the reconstructed mesh slightly thicker. The rendered depth maps themselves are accurate.

zParquet avatar Sep 03 '23 04:09 zParquet

Maybe try changing the engine to Cycles? I found that Eevee does some rounding on depth maps.

ivanpuhachov avatar Sep 12 '23 16:09 ivanpuhachov
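The suggestion above is a small change if the rendering script drives Blender through `bpy`. A minimal sketch, assuming a Blender 2.8+ API and that the depth output comes from the view layer's Z pass (both assumptions, not confirmed from the script):

```python
import bpy

# Switch the render engine from Eevee to Cycles, as suggested above.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Ensure the Z (depth) pass stays enabled on the active view layer,
# so any compositor depth output still receives data under Cycles.
bpy.context.view_layer.use_pass_z = True

# Keep the depth output in full-precision EXR to avoid half-float
# rounding on top of any engine differences.
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'
```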