[Bug]: Precision loss when using `float64` data type
What happened?
Loss of precision is observed in several Heat functions when using the `float64`
data type. The cause is an intermediate conversion to `float32` (the PyTorch
default floating-point type) before the resulting tensor is converted back to
`float64`, by which point the extra precision is already lost.
The following functions are affected:
- `arange`
- `array`
- `linspace`
- `abs`
Code snippet
```python
>>> import heat as ht
>>> ht.arange(16777217.0, 16777218, 1, dtype=ht.float64)
DNDarray([16777216.], dtype=ht.float64, device=cpu:0, split=None)
>>> ht.array(16777217.0, dtype=ht.float64)
DNDarray(16777216., dtype=ht.float64, device=cpu:0, split=None)
```
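The root cause can be reproduced in plain PyTorch, independent of Heat. The test value 16777217 is 2^24 + 1, the smallest positive integer that `float32` cannot represent exactly (its 24-bit significand runs out there), so any round trip through the PyTorch default `float32` silently rounds it to 16777216 even if the tensor is cast back to `float64` afterwards. A minimal sketch of the failure mode:

```python
import torch

# PyTorch's default floating dtype is float32, so a plain float literal
# is rounded to the nearest representable float32 value on construction.
a = torch.tensor(16777217.0)
print(a.item())                 # 16777216.0 -- precision already lost

# Casting back to float64 afterwards cannot recover the lost bit.
b = a.to(torch.float64)
print(b.item())                 # still 16777216.0

# Constructing directly in float64 preserves the value exactly,
# which is the behavior the affected Heat functions should have.
c = torch.tensor(16777217.0, dtype=torch.float64)
print(c.item())                 # 16777217.0
```

This suggests the fix is to pass the target `torch.float64` dtype to the underlying `torch` factory call up front rather than converting the finished tensor.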
Version
main (development branch)
Python version
3.10
PyTorch version
1.11