
How to train depth-nerfacto using monocular depth?

Open YuiNsky opened this issue 1 year ago • 5 comments

Thanks for the great work on this project! I'd like to know how to use monocular estimated depth to supervise training, since COLMAP depth is too sparse.

YuiNsky avatar Jan 31 '24 22:01 YuiNsky

I believe depth-nerfacto is hooked up to DepthDataset: if you don't provide any depth data, depth images are generated with ZoeDepth, a state-of-the-art monocular depth estimator. To be honest, though, I would recommend an approach that I think will perform better:

  1. Run COLMAP and get the sparse points.
  2. Run the monocular depth model on the images.
  3. For each image, compute a scale factor and offset with respect to the COLMAP points seen in that image.
  4. Multiply the depth image from the model by the scale factor and add the offset.
  5. Train the NeRF!
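The alignment in steps 3–4 can be sketched as a per-image least-squares fit. This is a minimal sketch, not nerfstudio API: `align_depth`, the array shapes, and the pixel sampling are my own assumptions.

```python
import numpy as np

def align_depth(mono_depth, colmap_uv, colmap_z):
    """Fit per-image scale and offset so that
    scale * mono_depth + offset ~= colmap_z at the sparse points.

    mono_depth: (H, W) monocular depth map for one image.
    colmap_uv:  (N, 2) pixel coordinates (u, v) of COLMAP points in this image.
    colmap_z:   (N,) COLMAP depths at those pixels.
    """
    # Sample the monocular depth at the sparse point pixel locations.
    u = colmap_uv[:, 0].round().astype(int)
    v = colmap_uv[:, 1].round().astype(int)
    m = mono_depth[v, u]
    # Solve [m, 1] @ [scale, offset]^T = colmap_z in the least-squares sense.
    A = np.stack([m, np.ones_like(m)], axis=1)
    (scale, offset), *_ = np.linalg.lstsq(A, colmap_z, rcond=None)
    return scale * mono_depth + offset
```

A robust variant (e.g. RANSAC over the sparse points) may help when COLMAP depths contain outliers.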

peasant98 avatar Feb 06 '24 00:02 peasant98

> I believe depth nerfacto is hooked up to DepthDataset, where if you don't provide any depth data, then ZOE (state of the art monocular depth estimation) depth images will be generated. [...]

As you mentioned, without depth information depth-nerfacto uses ZoeDepth to estimate the depth. However, when I run it, I encounter the following error. Any feedback?

/nerfstudio/models/depth_nerfacto.py", line 86, in get_metrics_dict
    raise ValueError(
ValueError: Forcing pseudodepth loss, but depth loss type (DepthLossType.DS_NERF) must be one of (<DepthLossType.SPARSENERF_RANKING: 3>,)

https://github.com/nerfstudio-project/nerfstudio/blob/242c23f0f067064c16c49376c02271cd1cd2303b/nerfstudio/models/depth_nerfacto.py#L79-L88

aeskandari68 avatar Feb 06 '24 13:02 aeskandari68

> ValueError: Forcing pseudodepth loss, but depth loss type (DepthLossType.DS_NERF) must be one of (<DepthLossType.SPARSENERF_RANKING: 3>,)

Try setting `--pipeline.model.depth-loss-type SPARSENERF_RANKING`.
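For reference, the flag goes on the training command. A minimal sketch, where the data path is a placeholder and all other options are left at their defaults:

```shell
# Hypothetical invocation; replace the data path with your own dataset.
ns-train depth-nerfacto \
  --pipeline.model.depth-loss-type SPARSENERF_RANKING \
  --data /path/to/your/data
```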

MartinEthier avatar Feb 06 '24 18:02 MartinEthier