JNeRF
maybe some bugs?
- In `grid_encode.py`, `log2` in Python is not exactly the same as `std::log2` in C++. This can lead to a mismatch in `scale` in some cases.
- `jt.nn.softplus` is not the same as the softplus in PyTorch. I don't know if this will cause any problems.
- In GridEncode, the JNeRF implementation doesn't have a loss scale, while tcnn has one (`loss_scale=128`). This will cause vanishing gradients.
- When computing `alpha` in SDF, JNeRF uses `safe_clip(0.0, 1.0)`. I think `clamp_(0.0, 1.0)` should be used instead.
- The `cumprod` function in Jittor is unsafe! Have a look at the PyTorch implementation, or at mine in Jeuralangelo.
- JNeRF doesn't have weight norm, but I found it in Jittor's source. I don't know why.
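Regarding the `cumprod` point: in volume rendering, transmittance is usually an *exclusive* cumulative product (a leading 1, optionally with an epsilon guard against exact zeros), which is how the PyTorch NeRF reference code handles it. A minimal pure-Python sketch of that pattern (the function name and the `eps` guard are illustrative, not JNeRF's actual code):

```python
def exclusive_cumprod(xs, eps=1e-10):
    """Exclusive cumulative product: out[i] = xs[0] * ... * xs[i-1], out[0] = 1.
    In NeRF-style rendering, xs would be (1 - alpha) per sample along a ray,
    and the eps floor keeps later terms (and log-space gradients) finite."""
    out = []
    running = 1.0
    for x in xs:
        out.append(running)   # product of all *previous* terms
        running *= max(x, eps)
    return out

# Transmittance for per-sample (1 - alpha) values of 0.5 each:
weights = exclusive_cumprod([0.5, 0.5, 0.5])  # -> [1.0, 0.5, 0.25]
```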
I met the No. 3 error: the loss increases sharply during training. Do you have any solution to fix it?
Refer to my code; it might help you.
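On the No. 3 issue: a loss scale works by multiplying the loss before backpropagation so that tiny half-precision gradients don't flush to zero, then dividing the gradients back afterwards. A toy stdlib-only illustration of the underflow it prevents (not JNeRF or tcnn code; `LOSS_SCALE = 128` matches the tcnn default mentioned in the list above):

```python
import struct

LOSS_SCALE = 128.0  # tcnn's default, per the bug list

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE half precision (struct's 'e' format),
    mimicking what happens when a gradient is stored in fp16."""
    return struct.unpack('e', struct.pack('e', x))[0]

# A true gradient this small underflows to zero when stored in fp16:
grad = 1e-8
naive = to_fp16(grad)  # 0.0 -- the gradient vanishes

# Scaling the loss (and hence the gradient) by 128 keeps it representable;
# dividing after the backward pass recovers the correct magnitude:
rescued = to_fp16(grad * LOSS_SCALE) / LOSS_SCALE  # small but nonzero
```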