nerf-pytorch
Results of Google Colab's example
Hi, nice work with NeRF, but I have a question: I tried running the example at https://colab.research.google.com/drive/1L6QExI2lw5xhJ-MLlIwpbgf7rxW7fcz3, but even after 92,450 iterations the validation loss stays around 0.17 and the PSNR around 7.6, and no usable images are rendered for either the coarse or the fine model. Why does this happen? Did I miss something? I'm attaching an image showing this.
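(As a sanity check, those two numbers are at least consistent with each other, assuming the validation loss is a plain MSE over pixel values in [0, 1], which is what the usual PSNR formula in NeRF implementations implies:

import math

mse = 0.17                      # reported validation loss
psnr = -10.0 * math.log10(mse)  # PSNR in dB for images in [0, 1]
print(round(psnr, 1))           # 7.7, matching the reported ~7.6 PSNR

So the metrics are internally consistent; the model is simply not learning the scene.)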

Another thing I did was to uncomment ReplicateNeRFModel (and comment out VeryTinyNeRFModel), because another error appeared otherwise. Only with this change did the example run:
model_coarse = ReplicateNeRFModel(
    hidden_size=128,
    num_encoding_fn_xyz=num_encoding_fn_xyz,
    num_encoding_fn_dir=num_encoding_fn_dir,
    include_input_xyz=include_input_xyz,
    include_input_dir=include_input_dir,
)
# model_coarse = VeryTinyNeRFModel()
model_coarse.to(device)

# Initialize a fine-resolution model, if specified.
model_fine = ReplicateNeRFModel(
    hidden_size=128,
    num_encoding_fn_xyz=num_encoding_fn_xyz,
    num_encoding_fn_dir=num_encoding_fn_dir,
    include_input_xyz=include_input_xyz,
    include_input_dir=include_input_dir,
)
# model_fine = VeryTinyNeRFModel()
model_fine.to(device)
Thx
I also encountered this problem. It seems to be caused by bad initialization and sampling (or the ReLU activations), so training gets stuck in a local optimum. Changing the seed (e.g., uncommenting seed=1234) fixes it.
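For anyone following along, a minimal sketch of what "changing the seed" means in practice, assuming the notebook seeds the usual Python/NumPy/PyTorch RNGs (the name seed and the value 1234 come from the commented-out line; the rest is illustrative):

import random

import numpy as np
import torch

seed = 1234  # any seed that avoids the bad initialization works
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

With the RNGs fixed, the network initialization and ray sampling become reproducible, so a seed that trains well once will train well every time.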
Thanks. How did you figure this out? You are amazing.