nerfstudio
Instant-NGP epsilon produces worse results early in training
Related to PR #134 and Issue #117. cc: @brentyi
eps: 1e-4
- iter 200: [image]
- iter 500: [image]

eps: Default
- iter 200: [image]
- iter 500: [image]
Hmm, yeah, ideally we can lower the eps value, which I'd guess is doable with one or both of:
- Weight decay
- Mixed precision (via torch.amp?) / tracking gradient stats in float32

It's possible this would let us swap the softplus density activation back in too, since the trunc exp also seemed to hurt the training curve early on (see curve in #104).
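A rough sketch of the second option, assuming the standard torch autocast API (shown on CPU with bfloat16 for portability; on GPU this would typically be `torch.autocast("cuda")` with float16 plus a `GradScaler`). The key point is that the parameters, gradients, and Adam moment buffers all stay in float32 even though the forward math runs in low precision:

```python
import torch

model = torch.nn.Linear(64, 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-2, eps=1e-8)

x = torch.randn(8, 64)
for _ in range(3):
    opt.zero_grad()
    # Forward math runs in bfloat16 under autocast; the float32 master
    # params are untouched.
    with torch.autocast("cpu", dtype=torch.bfloat16):
        loss = model(x).square().mean()
    loss.backward()  # grads land in float32, matching the param dtype
    opt.step()

# Adam's moment buffers (the "gradient stats") are float32, so the
# eps-vs-sqrt(v_hat) comparison isn't distorted by low-precision rounding.
stats = next(iter(opt.state.values()))
print(stats["exp_avg"].dtype, stats["exp_avg_sq"].dtype)
```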