
LLVM support

Open gtusr opened this issue 2 years ago • 7 comments

Are there any plans to introduce support for compiling Mathics code down to LLVM?

If so, this would allow Mathics to leverage work that is taking place to bring automatic differentiation to the platform:

Automatic Differentiation for LLVM-hosted languages

gtusr avatar Dec 13 '22 03:12 gtusr

The link is broken.

TiagoCavalcante avatar Dec 13 '22 03:12 TiagoCavalcante

Link is now updated.

Adding a link to a native Wolfram Language AD package that may also work in Mathics:

Dual Numbers Package for Automatic Differentiation in Mathematica

gtusr avatar Dec 26 '22 21:12 gtusr

Thanks for the link, information and idea.

In the short term, though, I don't think this can be done in the code base as it is. To a first approximation, think of Mathics as just a front end to SymPy, SciKit, PIL, mpmath, and so on. We would benefit most if this were done inside one of those packages we depend on, rather than one level removed from them.

Let me elaborate a little on this. We do have the ability to compile down to LLVM via Cython and compiled functions. But at the LLVM level, Mathics variables are seen as untyped; I imagine they would appear as pointers to objects, and fetching a variable's value would be the result of a call.

Right now, the Mathics interpreter is a tree interpreter over M-expressions. For LLVM compilation to have any conceivable benefit, we would first have to turn a Mathics expression into an equivalent Python expression with types attached; only then could it be compiled down to LLVM with enough structure for the compiler to do something useful with it.
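To make the "typed Python expression" step concrete, here is a hypothetical sketch (the function names and the tuple-based tree format are invented for illustration, not the actual Mathics internals): a tiny M-expression-like tree is translated into Python source with a type annotation, which is the form a JIT such as Numba or llvmlite could then lower to machine code.

```python
# Hypothetical sketch: translate a small expression tree into typed
# Python source. The tuple format and names are invented for
# illustration; this is NOT the actual Mathics representation.

def tree_to_python(expr):
    """Translate a nested tuple like ('Plus', ('Times', 'x', 'x'), 'x')
    into Python source text."""
    if isinstance(expr, str):
        return expr  # a symbol/variable
    head, *args = expr
    ops = {'Plus': '+', 'Times': '*', 'Power': '**'}
    return '(' + f' {ops[head]} '.join(tree_to_python(a) for a in args) + ')'

source = tree_to_python(('Plus', ('Times', 'x', 'x'), 'x'))  # x*x + x
# The float annotation is where the "types attached" step would happen.
fn_src = f"def f(x: float) -> float:\n    return {source}"
namespace = {}
exec(compile(fn_src, '<mathics-sketch>', 'exec'), namespace)
print(namespace['f'](3.0))  # x*x + x at x = 3.0 -> 12.0
```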

The Dual Numbers Package that you cite uses the Automatic Differentiation already built into Mathematica: it reimplements numbers in a way that lets Mathematica's engine apply its existing Automatic Differentiation to them.
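For readers unfamiliar with the technique, here is a minimal dual-number sketch of forward-mode automatic differentiation, the idea such a package builds on (this is an illustration of the general technique, not code from the linked package):

```python
# Minimal forward-mode AD via dual numbers: each value carries its
# derivative along, and arithmetic updates both by the chain rule.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at Dual(x, 1) so the deriv slot carries df/dx."""
    return f(Dual(x, 1.0)).deriv

print(derivative(lambda x: x * x + 3 * x, 2.0))  # d/dx(x^2 + 3x) at 2 -> 7.0
```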

rocky avatar Dec 26 '22 23:12 rocky

Actually, we have an llvmlite compiler with limited support for arithmetic functions as well as some loop and control keywords. Look at the mathics.builtin.compile module.

mmatera avatar Dec 27 '22 01:12 mmatera

> Actually, we have an llvmlite compiler with limited support for arithmetic functions as well as some loop and control keywords. Look at the mathics.builtin.compile module.

I don't think that will be very useful, for exactly the reasons cited above: it is wishful thinking that a compiler is going to untangle levels of abstraction, some of which are the root cause of the inefficiency in the first place.

But prove me wrong!

rocky avatar Dec 27 '22 01:12 rocky

The inefficient part is the standard evaluation. If you then evaluate Compile over the evaluated expression, and the expression is not too complicated, you get llvmlite-compiled code that you could take and use elsewhere. So the inefficient evaluation process translates into an inefficient compiler, but the resulting object code should be fine.
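The "slow compiler, fast object code" trade-off can be sketched in plain Python (illustrative only; `slow_translate` is an invented stand-in, not the mathics.builtin.compile API): the translation step may be expensive, but the callable it returns is cheap to invoke repeatedly.

```python
# Sketch of "compile once, reuse the fast object": the translation
# step stands in for Mathics's slow evaluation + lowering, and the
# returned callable stands in for the llvmlite-compiled result.
def slow_translate(source):
    # Imagine an expensive evaluation happening here before lowering.
    code = compile(f"lambda x: {source}", "<sketch>", "eval")
    return eval(code)

f = slow_translate("x * x + 1")  # pay the translation cost once
print(f(10))                      # -> 101; later calls skip translation
```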

mmatera avatar Dec 27 '22 02:12 mmatera

This could be useful: https://eli.thegreenplace.net/2015/calling-back-into-python-from-llvmlite-jited-code/
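The linked post shows JIT-compiled llvmlite code calling back into the Python runtime through a C function pointer. The callback half of that mechanism can be demonstrated with the standard library's ctypes alone (a minimal sketch, not the post's full llvmlite setup):

```python
# Wrap a Python function as a C function pointer with ctypes.
# In the linked post, the integer address of such a pointer is what
# gets handed to the llvmlite-JITed code so it can call back into Python.
import ctypes

CALLBACK = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)

def py_square(x):
    return x * x

c_square = CALLBACK(py_square)  # Python callable behind a C ABI pointer
print(c_square(4.0))  # -> 16.0
```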

mmatera avatar Dec 27 '22 02:12 mmatera