pyarray and pytensor constructors can break pybind11 overload resolution
Currently, the pyarray and pytensor constructors can throw exceptions when constructed from a pyobject (in case of a bad layout, or an incorrect dimension for pytensor). This can stop pybind11's overload resolution mechanism, leading to a failed function call even though other valid overloads may exist.
For example, if we have two overloaded definitions of the foo function, one taking a 1d pytensor and the other taking a 2d pytensor:
m.def("foo", [](xt::pytensor<int, 1> a) { return 1; });
m.def("foo", [](xt::pytensor<int, 2> a) { return 2; });
trying to call:
foo(np.ones((2, 2), dtype=np.int32))
will produce a runtime error ("NumPy: ndarray has incorrect number of dimensions") instead of using the second definition of foo.
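For reference, a more complete reproduction could look roughly like the following (the module name overload_repro is made up for illustration; the FORCE_IMPORT_ARRAY / xt::import_numpy() setup follows the usual xtensor-python pattern):

#include <pybind11/pybind11.h>
#define FORCE_IMPORT_ARRAY
#include <xtensor-python/pytensor.hpp>

namespace py = pybind11;

PYBIND11_MODULE(overload_repro, m)
{
    xt::import_numpy();  // required once before using xtensor-python types
    m.def("foo", [](xt::pytensor<int, 1> a) { return 1; });  // 1-d overload
    m.def("foo", [](xt::pytensor<int, 2> a) { return 2; });  // 2-d overload
}

Building this module and calling foo(np.ones((2, 2), dtype=np.int32)) from Python raises the error above instead of dispatching to the 2-d overload.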
A possible solution could be to swallow the exception in the pytensor and pyarray type casters and simply return false (cast has failed) to let pybind11 continue its job. However, this solution would remove precious debugging information that helps end users understand why a function call failed.
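As a rough sketch of that idea (this is not the actual xtensor-python caster, and the constructor call shown is schematic), the body of the casters' load() would do something like the following: pybind11 type casters report a failed conversion by returning false from load(), so catching the constructor's exception there lets overload resolution move on to the next candidate.

#include <exception>
#include <pybind11/pybind11.h>
#include <xtensor-python/pytensor.hpp>

// Illustrative sketch only, mimicking what a caster's load() could do.
template <class T, std::size_t N>
bool load_pytensor(pybind11::handle src, xt::pytensor<T, N>& out)
{
    try
    {
        // The pytensor constructor throws if src has the wrong number of
        // dimensions or an incompatible layout.
        out = xt::pytensor<T, N>(pybind11::reinterpret_borrow<pybind11::object>(src));
        return true;
    }
    catch (const std::exception&)
    {
        // Swallow the error and report "cast failed" instead of aborting
        // the whole call, so pybind11 can try the remaining overloads.
        return false;
    }
}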
I just experienced the same issue. Is there a plan to fix it soon? Thanks a lot!
Any suggestions for a workaround are also appreciated.
@PerretB @zhujun98 this one completely fell through the cracks, although it appears that a fix would be simple.
@PerretB @zhujun98 a pull request along the lines of what is described above would be very welcome. Feel free to go ahead and I will review it.
Hi, I remember discussing this on Gitter, probably with @wolfv and @JohanMabille. I think that at the time we decided to keep the better debugging info for users rather than give devs more flexibility. One option could be to implement the "catch exception" solution and add extra, optional debug info on request through a logging tool, but I don't remember whether there is a standard mechanism for this in xtensor*.
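To make that idea concrete, here is one possible sketch of "catch the exception but keep optional debug info"; the compile-time switch XTENSOR_PYTHON_DEBUG_CASTS is purely hypothetical, not an existing xtensor* option:

#include <exception>
#include <pybind11/pybind11.h>

// Hypothetical helper: run a conversion that may throw, and on failure
// optionally surface the reason as a Python RuntimeWarning instead of
// silently discarding it. Returning false still lets pybind11 try the
// remaining overloads.
template <class LoadImpl>
bool try_load_with_debug(LoadImpl&& load_impl)
{
    try
    {
        return load_impl();
    }
    catch (const std::exception& e)
    {
#ifdef XTENSOR_PYTHON_DEBUG_CASTS
        // PyErr_WarnEx does not abort the call; it only makes the failed
        // cast visible to users who opted in to the debug switch.
        PyErr_WarnEx(PyExc_RuntimeWarning, e.what(), 1);
#endif
        return false;
    }
}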