Tensor: fix constructor selection logic
Hi!

This is a small fix for the `Tensor` constructor selection logic.
Currently, the following can happen, which is very confusing:

```python
import drjit as dr
import mitsuba as mi

color = mi.Color3f(2.)
dr.enable_grad(color)
tensor = mi.TensorXf(color)
assert dr.grad_enabled(tensor)  # <-- fails
```
The issue is that `Tensor` should not be constructible from `Color3f` in the first place, but because the constructor selection logic does not realize that `Color3f` is a kind of DrJit array, it falls back to the `__array_interface__` constructor.

I changed the logic to use the `IsDrJit` attribute, which is the criterion used by `dr.is_array_v()`.
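For illustration, here is a minimal, self-contained sketch of the selection logic described above (this is not DrJit's actual implementation; the class and function names are hypothetical): a type is recognized as a DrJit array by checking for an `IsDrJit` marker attribute, and only non-DrJit types fall through to the `__array_interface__` path.

```python
class Color3f:
    """Stand-in for a DrJit array type (hypothetical, for illustration)."""
    IsDrJit = True  # marker attribute, the same criterion dr.is_array_v() uses

    def __init__(self, value):
        self.value = value


def is_drjit_array(obj):
    # Mirror dr.is_array_v(): look for the IsDrJit attribute on the type
    return getattr(type(obj), "IsDrJit", False)


def select_tensor_constructor(obj):
    if is_drjit_array(obj):
        # Dedicated DrJit conversion path (can preserve AD state)
        return "drjit_path"
    if hasattr(obj, "__array_interface__"):
        # Generic buffer-protocol fallback, which silently drops gradients
        return "array_interface_path"
    raise TypeError(f"cannot construct tensor from {type(obj).__name__}")


print(select_tensor_constructor(Color3f(2.0)))  # -> drjit_path
```

With this predicate in place, a DrJit array can never reach the `__array_interface__` fallback by accident.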
Hi @merlinND :smile:
We've run into this at least a few times already without ever properly solving it.
Rather than returning an error and potentially breaking someone's existing code that doesn't rely on AD, I'd prefer to figure out the generic set of scatter operations we'd need to handle these casts into `TensorXf` objects without breaking gradient tracking.

Does this seem reasonable to you? I can look into this in a few days.
Agreed, actually being able to construct the `TensorXf` from those higher-order arrays would be even cooler! IMO the really important part is that we never fall back to `__array_interface__` or similar for DrJit arrays.
Would `dr.ravel()` do the trick?
> Would `dr.ravel()` do the trick?
Yes! It already figures out the necessary scatters with proper gradient tracking.
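To make the idea concrete, here is a toy model of a ravel-based conversion (again, hypothetical names, not DrJit's real code): flattening a nested array into a 1D buffer while carrying the grad-enabled state along, which is what a scatter-based `dr.ravel()` achieves with real AD tracking.

```python
class FakeArray:
    """Toy stand-in for a DrJit array: nested channels plus an AD flag."""

    def __init__(self, channels, grad_enabled=False):
        self.channels = channels          # e.g. [[r...], [g...], [b...]]
        self.grad_enabled = grad_enabled


def ravel(arr):
    # Flatten channel-by-channel into one 1D buffer. A real implementation
    # would emit scatter ops, which are differentiable, so the AD state
    # naturally survives the conversion.
    flat = [v for channel in arr.channels for v in channel]
    return FakeArray(flat, grad_enabled=arr.grad_enabled)


color = FakeArray([[2.0], [2.0], [2.0]], grad_enabled=True)
tensor = ravel(color)
assert tensor.grad_enabled  # gradient tracking survives the flattening
```

The key property is that the conversion is built from differentiable primitives, so constructing a `TensorXf` from a grad-enabled array no longer silently detaches it.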