
Support for mlx.float64

Open kyrollosyanny opened this issue 1 year ago • 10 comments

Describe the bug Would it be possible to support float64 types? For some numerical simulations, having float64 is important for the accuracy of the simulation. The goal is to use mlx for automatic differentiation in these kinds of scenarios.
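For illustration (this example is not from the original report), a minimal numpy comparison shows the kind of accuracy gap at stake: a perturbation of 1e-8 is below float32's machine epsilon (~1.2e-7), so it vanishes entirely in single precision but survives in double precision.

```python
import numpy as np

x = 1e-8  # a perturbation typical of sensitivity analyses

# In float64 (eps ~ 2.2e-16) the perturbation is representable near 1.0
f64 = np.float64(1.0) + np.float64(x) - np.float64(1.0)

# In float32 (eps ~ 1.2e-7) it is rounded away: 1.0 + 1e-8 == 1.0
f32 = np.float32(1.0) + np.float32(x) - np.float32(1.0)

print(f64)  # ~1e-8
print(f32)  # 0.0
```

Simulations whose signals live near this threshold lose them outright in float32, which is the motivation for the request.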

kyrollosyanny avatar Mar 06 '24 22:03 kyrollosyanny

Double isn’t possible in Metal. In theory we could do it on the CPU only, but that is likely a lot less interesting to you?

awni avatar Mar 06 '24 22:03 awni

For some simulations and optimizations, CPU is more than enough. If it is possible to add that support in future versions, that would be great. Thanks a lot.

kyrollosyanny avatar Mar 06 '24 23:03 kyrollosyanny

Sounds good, I'll leave this open for now as a possible enhancement. I don't know if we will do it, but people can comment here with use cases etc to help us prioritize.

awni avatar Mar 07 '24 01:03 awni

It would be very helpful if float64 were added; a similar issue appears in my scientific simulations when using the 'mps' backend in PyTorch.

Andyuch avatar Jun 05 '24 17:06 Andyuch

I totally agree; I also need float64 on the CPU for numerical experiments, where I compare float64 results against other frameworks (e.g. torch, jax, or numpy). Currently I cannot include the MLX framework in my numerical experiments. It would be very helpful if float64 were added.

beratuna avatar Aug 12 '24 08:08 beratuna

@awni, regarding "Metal doesn't support double": is that a fundamental hardware limitation that will never be overcome, or a limitation of the current version of Metal? And do you know of any plans to support double?

kyrollosyanny avatar Aug 21 '24 19:08 kyrollosyanny

It is a limitation of the hardware / Metal stack. It's unlikely we will have a float64 GPU back-end anytime in the foreseeable future. A float64 CPU back-end is doable, but it's less clear what the benefit of that is over using, say, numpy. But feel free to comment if there is some benefit there and we can prioritize accordingly.

awni avatar Aug 21 '24 20:08 awni

Unfortunately, the lack of float64 limits the range of scientific problems that can be tackled. Would MLX support for float64 on the CPU offer any benefit over numpy in mixed-precision codes, where the bulk of the computation is done in float64 on the CPU and only certain solvers run on the GPU for speed, converting back and forth between float64 and float32 at the boundary?
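The mixed-precision pattern described above can be sketched with numpy alone (the helper name `inner_solver_f32` and the squaring kernel are placeholders, not MLX API): the outer pipeline stays in float64, and only the inner stage is downcast, standing in for the GPU-bound solver.

```python
import numpy as np

# Outer pipeline state, kept in float64 on the CPU.
state = np.linspace(0.0, 1.0, 1024, dtype=np.float64)

def inner_solver_f32(x64):
    """Placeholder for the fast solver that would run on the GPU."""
    x32 = x64.astype(np.float32)   # hand-off: downcast at the boundary
    y32 = x32 * x32                # the float32 inner kernel
    return y32.astype(np.float64)  # upcast back into the float64 pipeline

result = inner_solver_f32(state)

# The round-trip error is bounded by float32 machine epsilon, so the
# float64 outer loop only sees single-precision noise from this stage.
max_err = np.max(np.abs(result - state**2))
```

The question in the comment is whether doing the float64 side in native MLX (rather than numpy) would save these conversions or otherwise help.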

sck-at-ucy avatar Aug 21 '24 20:08 sck-at-ucy

Would MLX support for float64 on the CPU offer any benefit over numpy in mixed-precision codes, where the bulk of the computation is done in float64 on the CPU and only certain solvers run on the GPU for speed?

Indeed that's a good example of where it might be useful to do stuff in native MLX even on the CPU vs NumPy.

awni avatar Aug 21 '24 23:08 awni

The main advantage I see for CPU float64 in MLX vs numpy is differentiability. For many scientific computing problems (optical ray tracing, nanophotonics, fluid simulations, and others), float64 is needed for accuracy, and differentiability is very desirable for inverse design and system analysis. Also, for many of these design-and-analysis problems, CPU speed is good enough to run a lot of optimizations.
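The precision point and the differentiability point are linked: without autodiff, gradients must be approximated by finite differences, whose best achievable accuracy is capped by machine epsilon. A numpy sketch (the helper `fd_grad` is hypothetical, for illustration only) shows the gap for d/dx sin(x) at x = 1:

```python
import numpy as np

def f(x):
    return np.sin(x)

def fd_grad(x, h, dtype):
    """Central finite difference of f at x, computed entirely in `dtype`.

    The error is roughly h**2/6 * |f'''| (truncation) + eps/h (rounding),
    so the best case scales with eps**(2/3): float32 caps gradient accuracy
    around 1e-5, float64 reaches roughly 1e-11. Autodiff avoids this trade-off.
    """
    x, h = dtype(x), dtype(h)
    return (f(x + h) - f(x - h)) / (dtype(2) * h)

true_grad = np.cos(1.0)
g32 = fd_grad(1.0, 1e-3, np.float32)  # error around 1e-4
g64 = fd_grad(1.0, 1e-6, np.float64)  # error below 1e-9
```

This is why CPU-only float64 autodiff would still be a meaningful step up over numpy for the inverse-design workflows described above.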

Finally, I know little about the Metal stack, but I wanted to share this GitHub repository (https://github.com/philipturner/metal-float64) for a metal-float64 library in case it is useful for brainstorming.

Thanks a lot :)

kyrollosyanny avatar Aug 23 '24 17:08 kyrollosyanny