HashNeRF-pytorch
sparsity_loss = entropy
When I run this code with the synthetic Lego dataset, it works fine, but with the LLFF dataset I run into the following issue.

Command:

`python run_nerf.py --config configs/fern.txt --finest_res 512 --log2_hashmap_size 19 --lrate 0.01 --lrate_decay 10`

Output:

0 0.0010004043579101562
[00:00<?, ?it/s] [1/1] [99%]
c:\users\nezo\desktop\3d\hashnerf-pytorch\run_nerf.py(379)raw2outputs()
-> sparsity_loss = entropy

Could you please tell me what is causing this?
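For context (my understanding, not verified against your exact line numbers): the pdb prompt appears to come from the guard around the entropy/sparsity computation in `raw2outputs()`, which drops into the debugger when the volume-rendering weights contain NaNs. A minimal sketch of what I assume fails there:

```python
import torch
from torch.distributions import Categorical

# Hypothetical repro: if the volume-rendering weights become NaN
# (e.g. from a division by zero earlier in the pipeline), the Categorical
# used for the entropy/sparsity loss rejects them, and execution ends up
# at the breakpoint right before `sparsity_loss = entropy`.
weights = torch.full((1, 64), float("nan"))  # stand-in for NaN weights from a bad forward pass
probs = torch.cat([weights, 1.0 - weights.sum(-1, keepdim=True) + 1e-6], dim=-1)

try:
    entropy = Categorical(probs=probs).entropy()
except Exception as e:
    # With argument validation enabled (the default in recent PyTorch),
    # construction raises because NaN probabilities violate the simplex constraint.
    print("Categorical rejects the NaN weights:", e)
```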
I had the same problem. I'm not sure why, but I retrained without changing anything and it worked. Maybe it's an issue with the random initialization and optimization of the network parameters?
I also encountered the same problem on a self-made dataset. Is there something wrong with my data?
I think the problem is caused by `weights = (x - voxel_min_vertex)/(voxel_max_vertex - voxel_min_vertex)` in hash_encoding.py. You can change it to `weights = (x - voxel_min_vertex)/(voxel_max_vertex - voxel_min_vertex + 1e-6)` and try again! (See the sketch below for where this line sits.)
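For anyone looking for where that line lives: below is a minimal sketch of the trilinear interpolation step, modelled on hash_encoding.py as I understand it (the standalone function signature and shapes are my assumptions), with the epsilon added. The epsilon guards against a degenerate voxel where the min and max vertices coincide along an axis, which otherwise gives 0/0 = NaN and later breaks the Categorical in `raw2outputs()`.

```python
import torch

def trilinear_interp(x, voxel_min_vertex, voxel_max_vertex, voxel_embedds):
    """Interpolate the 8 corner embeddings of the voxel containing x.

    x:                [B, 3] query points
    voxel_min_vertex: [B, 3] lower corner of the enclosing voxel
    voxel_max_vertex: [B, 3] upper corner of the enclosing voxel
    voxel_embedds:    [B, 8, C] embeddings of the 8 voxel corners
    """
    # Relative position inside the voxel, nominally in [0, 1]^3.
    # The +1e-6 avoids a 0/0 division when min == max along some axis,
    # which would otherwise produce NaNs downstream.
    weights = (x - voxel_min_vertex) / (voxel_max_vertex - voxel_min_vertex + 1e-6)  # [B, 3]

    # Standard trilinear interpolation: collapse the x axis first...
    c00 = voxel_embedds[:, 0] * (1 - weights[:, 0][:, None]) + voxel_embedds[:, 4] * weights[:, 0][:, None]
    c01 = voxel_embedds[:, 1] * (1 - weights[:, 0][:, None]) + voxel_embedds[:, 5] * weights[:, 0][:, None]
    c10 = voxel_embedds[:, 2] * (1 - weights[:, 0][:, None]) + voxel_embedds[:, 6] * weights[:, 0][:, None]
    c11 = voxel_embedds[:, 3] * (1 - weights[:, 0][:, None]) + voxel_embedds[:, 7] * weights[:, 0][:, None]

    # ...then the y axis...
    c0 = c00 * (1 - weights[:, 1][:, None]) + c10 * weights[:, 1][:, None]
    c1 = c01 * (1 - weights[:, 1][:, None]) + c11 * weights[:, 1][:, None]

    # ...and finally the z axis.
    return c0 * (1 - weights[:, 2][:, None]) + c1 * weights[:, 2][:, None]  # [B, C]
```

An alternative to the epsilon would be to check `torch.isfinite(weights)` right after this line so bad samples are caught where they originate instead of much later in the sparsity loss.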
May I ask: why is this loss function added, and what is its purpose?