AMGX
Thoughts on supporting lower precision floats
Given the recent work on lower-precision floating point in the PyTorch community (see https://pytorch.org/blog/training-using-float8-fsdp2/), has there been any thought about whether AMGX will follow suit and support floats with less precision than 32- and 64-bit?
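For context on what "lower precision" could mean on the CUDA side: the toolkit already ships host/device types and conversion intrinsics for FP16, BF16, and (since CUDA 11.8) FP8 via `cuda_fp16.h`, `cuda_bf16.h`, and `cuda_fp8.h`, so the raw numeric building blocks exist independently of AMGX. A minimal sketch (compiled with nvcc) showing the precision loss from round-tripping a value through the reduced-precision types:

```cpp
#include <cstdio>
#include <cuda_fp16.h>  // __half, __float2half, __half2float
#include <cuda_bf16.h>  // __nv_bfloat16, __float2bfloat16, __bfloat162float

int main() {
    const float x = 0.1f;

    // Round-trip through FP16: 10 significand bits, so roughly 3 decimal
    // digits survive the conversion.
    const __half h = __float2half(x);
    printf("fp16 round-trip: %.9f\n", __half2float(h));

    // Round-trip through BF16: only 7 significand bits, but the same
    // exponent range as FP32.
    const __nv_bfloat16 b = __float2bfloat16(x);
    printf("bf16 round-trip: %.9f\n", __bfloat162float(b));
    return 0;
}
```

Whether and where these types would make sense inside an AMG hierarchy is of course a separate question from the types merely existing.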
As someone less familiar with the internals of AMGX, and more familiar with the community-driven interfaces, where in the code might one start to make this addition? A guess at the relevant plumbing is sketched below.
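From the outside, the place where precision is visible today is the mode enum in the public C API (`amgx_c.h`): modes such as `AMGX_mode_dDDI` and `AMGX_mode_dFFI` encode memory space plus matrix, vector, and index precision, and my understanding is that these fan out internally through the `TemplateConfig` machinery that instantiates the solvers, so tracing one existing mode from the API down into the templates seems like a reasonable first reading pass. Below is a sketch of where a new precision would surface at the API level; note that `AMGX_mode_dHHI` (device, FP16 matrix, FP16 vector) is hypothetical and does not exist in the current API:

```cpp
#include <amgx_c.h>

int main(void) {
    AMGX_initialize();

    AMGX_config_handle cfg;
    AMGX_config_create(&cfg, "config_version=2, solver=PCG, max_iters=100");

    AMGX_resources_handle rsrc;
    AMGX_resources_create_simple(&rsrc, cfg);

    // Today, the mode letters select memory space and precisions, e.g.
    //   dDDI = device, FP64 matrix, FP64 vector, 32-bit indices
    //   dFFI = device, FP32 matrix, FP32 vector, 32-bit indices
    AMGX_matrix_handle A;
    AMGX_matrix_create(&A, rsrc, AMGX_mode_dFFI);

    // Hypothetical: an FP16 mode would presumably slot in the same way.
    // AMGX_mode_dHHI does NOT exist today; it only marks the extension point.
    // AMGX_matrix_create(&A16, rsrc, AMGX_mode_dHHI);

    AMGX_matrix_destroy(A);
    AMGX_resources_destroy(rsrc);
    AMGX_config_destroy(cfg);
    AMGX_finalize();
    return 0;
}
```

If that reading is right, the work would be less about the API surface and more about adding the new scalar types to the internal template instantiations and kernels behind it, which the maintainers are better placed to confirm.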