mpmath
Implement exp() method on mpf/mpc objects?
Some background: I'm working extensively with numpy arrays with an object dtype which contain mpmath numbers. I found this to be much more convenient and general than the rather limited vector/matrix support in mpmath itself, even though it requires workarounds in some cases.
There are some simple things which could be done to improve interoperability with numpy arrays, and this issue is one example of that.
Consider this:
import numpy as np
from mpmath import mp

n = 9
nodes = np.array([mp.mpc(2j) * k / n * mp.pi for k in range(0, n)])  # numpy array of mpmath complex numbers
np.exp(nodes)
This currently fails with an error about a missing callable exp() method on the mpc type. In fact, this is easy to fix: one would only need to add a method like
def exp(self):
    return self.context.exp(self)
to mpc, and then the above numpy code does exactly what one would expect it to do. A bigger problem is the mpf object, which already has an exp member; however, it is not a function but apparently part of its core data structure.
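For reference, a quick check of what exp currently is on mpf (a hedged illustration; the exact value depends on the number, but the point is that exp is data rather than a callable):
>>> mp.mpf(8).exp  # exponent in the internal (sign, man, exp, bc) representation
3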
So essentially there is a general and a concrete question here:
- Is there interest at all in improving the numpy interoperability?
- Is the exp member of mpf part of its public API, or could it be renamed to make space for an exp() method?
Note that, e.g., np.abs(nodes) already works as expected. Other features like np.sqrt(nodes) could be supported analogously to this suggestion.
I would not mind doing this to improve compatibility, but I think some kind of deprecation warning is needed when replacing the existing mpf.exp property.
There may also be an issue with the Sage backend.
Alternatively, would it not be better to improve mpmath's matrix capabilities to do whatever is needed?
There isn't really an established protocol in Python for how to overload functions like exp (think how many other functions mpmath supports).
@oscarbenjamin I don't think it's practical to build an alternative implementation that can rival numpy arrays. Arbitrary dimension arrays, slicing in all its generality, broadcasting, hundreds of existing functions for array manipulation - all these things would take considerable effort to implement and make efficient. And you get all of that for free by simply using what is there in numpy. Frankly, I don't see the benefit of a package like mpmath implementing its own vectors and matrices.
It's true that there is no officially defined protocol for how to overload these functions, but by virtue of numpy existing there is a de facto standard, which simply requires implementing these methods. Here's a partial list of methods that could be supported analogously: sqrt, exp, sin, cos, tan, sinh, cosh, tanh, arcsin, arccos, arcsinh, arccosh, arctan, arctanh.
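To make the mechanism concrete, here is a minimal, self-contained sketch (the Toy class is purely hypothetical, not part of mpmath or numpy): for dtype=object arrays, numpy ufuncs such as np.exp and np.sqrt look up and call a same-named method on each element, which is exactly what an exp() method on mpc would hook into.
import math
import numpy as np

class Toy:
    # Hypothetical scalar type, used only to demonstrate numpy's object-dtype dispatch.
    def __init__(self, x):
        self.x = float(x)
    def exp(self):    # called element-wise by np.exp on object arrays
        return Toy(math.exp(self.x))
    def sqrt(self):   # called element-wise by np.sqrt on object arrays
        return Toy(math.sqrt(self.x))
    def __repr__(self):
        return "Toy(%r)" % self.x

a = np.array([Toy(0), Toy(1), Toy(4)], dtype=object)
print(np.exp(a))   # [Toy(1.0) Toy(2.718281828459045) Toy(54.598150033144236)]
print(np.sqrt(a))  # [Toy(0.0) Toy(1.0) Toy(2.0)]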
#250 - for sqrt
I doubt it's a good idea. As noted above, there is no established protocol for overloading functions like exp.
And you could use np.vectorize for arbitrary scalar functions:
>>> n = 3
>>> nodes = np.array([mp.mpc(2j) * k / n * np.pi for k in range(n)])
>>> np.vectorize(mp.exp)(nodes)
array([mpc(real='1.0', imag='0.0'),
       mpc(real='-0.49999999999999978', imag='0.86602540378443871'),
       mpc(real='-0.50000000000000044', imag='-0.86602540378443837')],
      dtype=object)
No compatibility breaks, no magic lists of "methods that could be supported", and probably even a slightly more efficient approach.
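As a side usage note (otypes is a standard np.vectorize argument; this is just an optional refinement): the output dtype can be pinned explicitly, which also avoids dtype inference problems when the input array happens to be empty:
>>> mp_exp = np.vectorize(mp.exp, otypes=[object])
>>> mp_exp(nodes).dtype
dtype('O')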
I'm closing this issue. Feel free to reopen if you have more arguments.