Calculation accuracy issue with 'mlx.core.arccos'
Describe the bug
float32 should give more accurate results than float16, but the output of 'mlx.core.arccos' does not appear to reflect this.
Using tf.math.acos in TensorFlow for the same computation, it can be seen that float32 produces visibly higher-precision results.
To Reproduce
Include code snippet
import mlx.core as mx

# Same inputs in float16 and float32
a = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float16)
b = mx.arccos(a)
print(b)

c = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float32)
d = mx.arccos(c)
print(d)
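Two things are worth separating in this repro: arccos is mathematically defined only on [-1, 1], so the out-of-range inputs produce NaN at any precision, and the precision difference between float16 and float32 is real even when the printed digits look the same. A sketch using NumPy (not MLX, but the same IEEE-754 semantics apply) illustrating both points:

```python
import numpy as np

vals = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]
a16 = np.array(vals, dtype=np.float16)
a32 = np.array(vals, dtype=np.float32)

# arccos is defined only on [-1, 1]; inputs outside that range yield NaN
# in every floating-point precision.
with np.errstate(invalid="ignore"):  # suppress the invalid-value warning
    b16 = np.arccos(a16)
    b32 = np.arccos(a32)

print(b16.tolist())  # NaNs for |x| > 1, coarse float16 values in between
print(b32.tolist())  # same NaNs, but more significant digits for valid inputs

# float32 really is closer to the true value than float16:
true_val = np.arccos(0.5)  # float64 reference
err16 = abs(float(b16[3]) - true_val)
err32 = abs(float(b32[3]) - true_val)
print(err16 > err32)
```

So the NaNs in the original output are expected behavior, and the apparent equal precision is only a display artifact, as explained below.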
Expected behavior
float32 should produce a higher-precision result than float16.
Desktop (please complete the following information):
- OS Version: macOS 14.2.1
- Version: 0.7.0
I think this comes from the fact that we only display a few digits when printing. If you want to see the full-precision output:
c = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float32)
d = mx.arccos(c)
print(d.tolist())
Gives
[nan, nan, 2.094395160675049, 1.0471975803375244, nan, nan]
Thanks for explaining.
PS: if higher-precision printing is important to you, feel free to open an issue. We already have a PR in progress on output formatting, so that is definitely something we could look into adding.
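For comparison (this is NumPy's API, not something MLX currently exposes), configurable print precision could look like NumPy's set_printoptions, where the underlying values are unchanged and only the repr gains digits:

```python
import numpy as np

x = np.arccos(np.array([-0.5, 0.5], dtype=np.float32))
print(x)  # default repr shows a limited number of digits

# Request 10 fixed decimal places in array output:
np.set_printoptions(precision=10, floatmode="fixed")
print(x)  # same values, more digits displayed
```

Something along these lines in MLX would make manual inspection of operator outputs easier without needing tolist().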
Higher-precision printing is indeed important to me because a lot of manual testing is sometimes required, and it allows a more direct view of an operator/API's results. Can I reopen this issue, or do I need to create a new one about printing precision? @awni
I will reopen it and change the title.