Add float16 support to types
PyTorch currently implements fp16 support for GPUs only. This issue will have to be resolved after HeAT has device support.
A first implementation idea is to check the device on which the HeAT tensor is to be allocated and to disallow the CPU backend.
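A minimal sketch of that device check, assuming HeAT's `"cpu"`/`"gpu"` device strings; the helper name is hypothetical and not part of HeAT's actual API:

```python
import torch


def check_float16_device(dtype: torch.dtype, device: str) -> None:
    # Hypothetical helper: reject float16 allocations on the CPU backend,
    # since PyTorch's fp16 kernels are (mostly) GPU-only.
    if dtype == torch.float16 and device == "cpu":
        raise TypeError(
            "float16 is not supported on the CPU backend; use a GPU device"
        )


check_float16_device(torch.float16, "gpu")    # passes
# check_float16_device(torch.float16, "cpu")  # would raise TypeError
```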
PyTorch has basic operations for float16 and bfloat16 by now. The function support, however, has as many holes as a Swiss cheese.
Yes, I tried to add float16, but many tests then failed because torch.ceil, torch.floor, and many more functions were not implemented for float16.
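For illustration, a small snippet showing the kind of failure meant here; whether it actually raises depends on the installed PyTorch version, since newer releases have been filling these gaps:

```python
import torch

x = torch.tensor([1.3, 2.7], dtype=torch.float16)

try:
    # On older CPU builds this raised errors along the lines of
    # "ceil_cpu" not implemented for 'Half'; on newer versions it may work.
    print(torch.ceil(x))
except RuntimeError as err:
    print("float16 op not supported:", err)
```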
This issue is still open, and many torch functions still don't support the Half data type.
Proposal:
- introduce an ht.float16 datatype
- introduce a @SkipWithHalf (or similar) decorator for those tests that would otherwise fail (see the sketch after this list)
- whenever a new PyTorch version is out, test without the decorator and update
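A minimal sketch of the proposed decorator, assuming a unittest-based test suite; the name skip_with_half and the example test are placeholders, not the implementation reviewed in #1109:

```python
import unittest

import torch


def skip_with_half(reason="torch op not implemented for float16"):
    # Placeholder decorator: unconditionally skips for now; once a new
    # PyTorch release adds the missing Half kernels, the skip can be
    # removed (or made conditional on torch.__version__).
    return unittest.skip(reason)


class TestRounding(unittest.TestCase):
    @skip_with_half()
    def test_ceil_float16(self):
        x = torch.tensor([1.2], dtype=torch.float16)
        self.assertEqual(torch.ceil(x).item(), 2.0)
```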
Reviewed within #1109
Branch 94-Add_float16_support_to_types created!