
Calculation accuracy issue about 'mlx.core.arccos'

Open Redempt1onzzZZ opened this issue 1 year ago • 5 comments

Describe the bug
float32 should have higher computational accuracy than float16, but the result of `mlx.core.arccos` does not appear to reflect this. Using `tf.math.acos` in TensorFlow for the same computation, float32 clearly produces more accurate results.

To Reproduce

Include code snippet

import mlx.core as mx

a = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float16)
b = mx.arccos(a)
print(b)

c = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float32)
d = mx.arccos(c)
print(d)


Expected behavior
float32 should produce a higher-precision result than float16.

Desktop (please complete the following information):

  • OS Version: macOS 14.2.1
  • MLX Version: 0.7.0

Redempt1onzzZZ avatar Jan 16 '24 12:01 Redempt1onzzZZ

I think this is from the fact that we only display a few digits. If you want to see the full precision output:

c = mx.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5], dtype=mx.float32)
d = mx.arccos(c)
print(d.tolist())

Gives

[nan, nan, 2.094395160675049, 1.0471975803375244, nan, nan]
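To illustrate the underlying point independently of MLX's print formatting, here is a minimal sketch using NumPy (assumed here as a stand-in, since the same dtype semantics apply): float32 really does carry more precision than float16 for `arccos`, and both dtypes return NaN outside the function's domain of [-1, 1].

```python
import math

import numpy as np

values = np.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])

# arccos is only defined on [-1, 1]; out-of-domain inputs yield NaN
# in both precisions (suppress the RuntimeWarning for clarity).
with np.errstate(invalid="ignore"):
    f16 = np.arccos(values.astype(np.float16))
    f32 = np.arccos(values.astype(np.float32))

print(f16.tolist())
print(f32.tolist())

# The float32 result for arccos(0.5) is closer to the true value
# than the float16 result.
exact = math.acos(0.5)
err16 = abs(float(f16[3]) - exact)
err32 = abs(float(f32[3]) - exact)
print(err16, err32)
```

So the float32 computation is in fact more accurate; the confusion came purely from how few digits the default `print` displays.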

awni avatar Jan 16 '24 14:01 awni

Thanks for explaining.

Redempt1onzzZZ avatar Jan 16 '24 14:01 Redempt1onzzZZ

PS: if higher-precision printing is important to you, feel free to open an issue. We already have a PR in progress on output formatting, so that is definitely something we could look into adding.

awni avatar Jan 16 '24 14:01 awni

Higher-precision printing is indeed important to me because a lot of manual testing is sometimes required, and it gives a more intuitive view of an operator/API's results. Can I reopen this issue, or do I need to create a new one about printing precision? @awni

Redempt1onzzZZ avatar Jan 17 '24 01:01 Redempt1onzzZZ

I will reopen it and change the title.

awni avatar Jan 17 '24 01:01 awni