AdaNeRF
Failed to train on the LLFF dataset
Hi! Thanks for your great work! I'm having a hard time training AdaNeRF on the LLFF dataset (fern).
At first it gave me pretty nice RGB and depth images, but from epoch 50k it loses the fern's nearest leaves and produces bulky images. I ran convert_llff.py with factor=8 and used the same config file, dense_training.ini.
If you used a different configuration for the LLFF dataset, could you share your config file? Or do you have any tips for training?
Hi!
Did you use dense_training.ini or dense_training_ndc.ini? For the LLFF datasets, we generated our results with NDC sampling, so you'll likely want to use dense_training_ndc.ini first.
In particular, for Fern, we used lossWeights = [0.005, 1.0]. If this does not resolve your issue, we can follow up with more debugging :)
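For clarity, the Fern-specific change amounts to a single line in dense_training_ndc.ini (a minimal sketch; everything else in the shipped config stays as-is):

```ini
# Fern-specific per-output loss weighting from this thread;
# the rest of dense_training_ndc.ini is left unchanged.
lossWeights = [0.005, 1.0]
```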
Hi! Thank you for your quick response. I tried dense_training_ndc.ini with lossWeights = [0.005, 1.0] and succeeded with dense training!
I have some additional questions. I also tried fine training with the NDC config file, and I got some gray areas with small sample counts (< 16). Are there particular parameters, such as adaptiveSamplingThreshold, for the Fern dataset?
Also, when I run test.py, it takes about 1 second to render the depth and RGB images (504x376 resolution). Is it slower than the results in the paper because I didn't use TensorRT and CUDA?
Awesome that you got the dense training to run, that's great news.
Could you clarify what exactly you mean by "gray areas", i.e., which output has these gray areas? In the meantime, we'll check whether there are parameters or other settings that could cause such an issue.
About test/inference performance: the run-time results in our paper were measured with our TensorRT/CUDA viewer (which we will upload soon - stay tuned).
I'm very excited to hear the test viewer will be uploaded soon!
I guess that when my fine-tuned network fails to locate the depth, the output gets blurry or turns gray.
In the Fern dataset, I got gray patches on the top-left window, the top-right ceiling, and the bottom-right floor, plus some blurry leaves.
I had similar results on the DONeRF datasets pavillon and classroom, where the sky and ceiling were covered in gray.
Comparing the estimated_depth images, the gray areas are more yellow in dense training and more purple in fine training. (Presumably yellow areas are closer and purple areas are farther in the depth image?)
So I suspect the sampling network gives the shading network wrong depths, and the shading network then fails to produce the right colors.
I got a satisfying result with sample_num=16, but I found it hard to fine-tune the network with smaller sample counts.
I tried training with adaptiveSamplingThreshold = 0.2, 0.1, and 0.001, with lossWeights = [0.005, 1.0], and used the latest relu and nerf weights from dense training as pretrained weights; a sketch of these overrides is below.
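For reference, here is a minimal sketch of the fine-training overrides I'm describing, as they would sit in the .ini (only lossWeights and adaptiveSamplingThreshold are keys confirmed in this thread; the rest of the config stays as shipped):

```ini
# Fine-training overrides tried in this thread (sketch, not a full config).
# lossWeights: the Fern values suggested above.
lossWeights = [0.005, 1.0]
# adaptiveSamplingThreshold: values tried were 0.2, 0.1, and 0.001.
adaptiveSamplingThreshold = 0.1
```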