nerf
dists = dists * tf.linalg.norm(rays_d[..., None, :], axis=-1)
Hi,
Why do we multiply each distance by the norm of its corresponding ray direction to convert it into a real-world distance?
dists = dists * tf.linalg.norm(rays_d[..., None, :], axis=-1)
Can you explain the principle behind it?
Thanks.
Here is an explanatory diagram I've plotted. I'm not 100% sure it's a correct explanation, but this is how I understand it:
I think it is correct.
Hi,
I think the norm should be multiplied into rays_d before it is added to rays_o, which seems more reasonable.
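If I read this suggestion correctly (i.e. make rays_d unit length before generating the sample points), here is a rough NumPy sketch of that idea for a single ray. The values are made up and the variable rays_d_unit is hypothetical; this is not how the released code works:

```python
import numpy as np

rays_o = np.array([0.0, 0.0, 0.0])
rays_d = np.array([2.0, 4.0, 6.0])   # not unit length
z_vals = np.array([0.5, 1.0, 1.75])  # samples along the ray parameter

# Normalize the direction first, then generate the sample points.
rays_d_unit = rays_d / np.linalg.norm(rays_d)
pts = rays_o[None, :] + rays_d_unit[None, :] * z_vals[:, None]

# With a unit direction, the gaps between z_vals are already real-world
# distances, so no later multiplication by the norm would be needed.
print(np.linalg.norm(pts[1:] - pts[:-1], axis=-1))  # [0.5, 0.75]
```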
The dists are relative to the size of the direction vector. So, for example, these 3 cases are exactly the same for a given ray:
- ray direction = (1, 2, 3), dist = 5
- ray direction = (2, 4, 6), dist = 2.5
- ray direction = (5, 10, 15), dist = 1
To rescale the dists to what they would be with a unit-length direction vector (i.e. real-world distances), this multiplication by the norm is performed. A quick numerical check follows below.
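Here is a small NumPy sketch (rather than the TensorFlow code, but using the same multiplication by the direction norm) that checks the three cases from the list above:

```python
import numpy as np

# (ray direction, parametric dist) pairs from the list above.
cases = [
    (np.array([1.0, 2.0, 3.0]), 5.0),
    (np.array([2.0, 4.0, 6.0]), 2.5),
    (np.array([5.0, 10.0, 15.0]), 1.0),
]

for ray_d, dist in cases:
    # Same operation as dists * tf.linalg.norm(rays_d[..., None, :], axis=-1)
    real_dist = dist * np.linalg.norm(ray_d)
    print(real_dist)  # all three print ~18.708: the same real-world length
```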
I think we can refer to the sample point generation process to understand the principle.
pts = rays_o[..., None, :] + rays_d[..., None, :] * z_vals[..., :, None]
and the input dists is obtained by subtracting adjacent z_vals.
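To tie the two lines together, here is a hedged NumPy sketch for a single ray (the values are made up, and the differencing of z_vals follows the description above rather than being copied from the repo): multiplying the z_vals gaps by the norm of rays_d reproduces the Euclidean gaps between consecutive pts.

```python
import numpy as np

rays_o = np.array([0.0, 0.0, 0.0])
rays_d = np.array([2.0, 4.0, 6.0])        # not unit length
z_vals = np.array([0.5, 1.0, 1.75, 3.0])  # samples along the ray parameter

# Sample points, as in the pts line above (single ray, so the broadcasting
# is simpler than in the TF code).
pts = rays_o[None, :] + rays_d[None, :] * z_vals[:, None]

# dists as gaps between adjacent z_vals, then converted to real-world lengths.
dists = z_vals[1:] - z_vals[:-1]
real_dists = dists * np.linalg.norm(rays_d)

# The converted dists match the actual Euclidean distances between the pts.
euclid = np.linalg.norm(pts[1:] - pts[:-1], axis=-1)
print(np.allclose(real_dists, euclid))  # True
```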