
Performance Variations when Reproducing Results on DTU Dataset


Hello and thank you for the insightful work you've shared!

While attempting to reproduce your results on the DTU dataset, I noticed some variations in performance.

Below is a summary of the results: [image: per-scan results table]

Interestingly, only case 106 aligns with the paper's results, while the others seem to diverge.

Implementation Details:

  1. I used the provided Dockerfile with one minor modification: the base image is FROM nvcr.io/nvidia/pytorch:22.12-py3, because my NVIDIA driver only supports up to CUDA 11.8.

  2. The evaluation method follows the NeuS paper (a minimal sketch of the metric is given after this list). Since case 106's results match the paper, I am fairly confident the evaluation itself is accurate.
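For reference, the metric in question is the bidirectional Chamfer distance between the predicted mesh and the ground-truth DTU scan. Here is a minimal sketch, assuming the mesh is already aligned to the GT coordinate frame; the sample count, the 20 mm clamp, and the simplified outlier handling are my assumptions, and the official DTU script additionally applies visibility masks:

```python
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def dtu_chamfer(pred_mesh_path, gt_ply_path, n_samples=1_000_000, max_dist=20.0):
    """Mean of accuracy (pred -> GT) and completeness (GT -> pred), in mm."""
    pred = trimesh.load(pred_mesh_path, force='mesh')
    pred_pts = pred.sample(n_samples)                        # points on the predicted surface
    gt_pts = np.asarray(trimesh.load(gt_ply_path).vertices)  # GT scan point cloud

    # Accuracy: nearest-GT-point distance for each predicted sample.
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]
    # Completeness: nearest-prediction distance for each GT point.
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]

    # Clamp outliers (the official DTU script filters by max distance and
    # visibility masks; clamping here is a simplification).
    acc = np.minimum(d_pred, max_dist).mean()
    comp = np.minimum(d_gt, max_dist).mean()
    return (acc + comp) / 2.0
```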

Questions:

  1. Might there be specific package versions that are critical to achieving the results? If so, could you kindly share those details?

  2. Are there other aspects or considerations that might explain the variations in performance?

Your guidance would be immensely appreciated. Thank you in advance for your time and expertise!

Dangzheng avatar Aug 23 '23 09:08 Dangzheng

Hi @Dangzheng @Runsong123

Thanks for reporting. We will look into this. A few quick comments:

  • We followed NeuralWarp's evaluation protocol; the details are described in the third section of Appendix D of the paper. A rough sketch of the protocol's mask-culling step is given after this list.
  • Based on @Runsong123's result, DTU 24 qualitatively matches the paper, but DTU 63/69 are quite different (Figure 10 of the paper). We are looking into releasing the meshes from the paper to give a sense of the expected results.
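One detail worth checking when comparing protocols: NeuralWarp-style DTU evaluation culls the reconstructed mesh with the per-view object masks before computing the Chamfer distance, and skipping this step can change the numbers noticeably. Below is a rough sketch of that culling step; the camera conventions, the world-to-camera matrix layout, and all names are assumptions, and mask dilation is omitted, so treat Appendix D and the NeuralWarp code as authoritative:

```python
import numpy as np
import trimesh

def cull_mesh(mesh, intrinsics, extrinsics, masks):
    """Keep only faces whose vertices project inside the object mask
    in at least one training view.

    intrinsics: list of (3, 3) K matrices
    extrinsics: list of (3, 4) world-to-camera [R|t] matrices
    masks:      list of (H, W) boolean object masks, one per view
    """
    verts_h = np.hstack([mesh.vertices, np.ones((len(mesh.vertices), 1))])
    visible = np.zeros(len(mesh.vertices), dtype=bool)

    for K, Rt, mask in zip(intrinsics, extrinsics, masks):
        cam = (Rt @ verts_h.T).T                    # (V, 3) camera-space points
        in_front = cam[:, 2] > 0
        uv = (K @ cam.T).T
        uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-8)
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        h, w = mask.shape
        in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h) & in_front
        idx = np.where(in_img)[0]
        visible[idx] |= mask[v[idx], u[idx]]

    # Keep faces where all three vertices were seen inside some mask.
    keep = visible[mesh.faces].all(axis=1)
    return trimesh.Trimesh(mesh.vertices, mesh.faces[keep])
```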

mli0603 avatar Aug 27 '23 19:08 mli0603

Hi @mli0603, any update on the mesh release?

jk4011 avatar Dec 07 '23 01:12 jk4011