Support for bfloat16 Data Types with PyTorch Automatic Mixed Precision
User story
As a user, I want to use `bfloat16` data types in lava-dl, with compatibility for PyTorch's `torch.amp` (Automatic Mixed Precision), to accelerate inference and training while maintaining numerical accuracy. This will allow for efficient computation and memory savings, leveraging the mixed precision capabilities of PyTorch to optimize performance for large-scale spiking neural networks (SNNs).
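A minimal sketch of the intended workflow, assuming a small SLAYER CUBA network in the style of the lava-dl tutorials. The layer sizes, neuron parameters, loss settings, and dummy inputs are placeholders chosen for illustration; whether the lava-dl kernels actually run correctly under `bfloat16` autocast today is exactly what this request asks for.

```python
import torch
import lava.lib.dl.slayer as slayer

device = torch.device('cuda')

# Illustrative neuron parameters and layer sizes; not tied to any particular model.
neuron_params = {'threshold': 1.25, 'current_decay': 0.25, 'voltage_decay': 0.03}
net = torch.nn.Sequential(
    slayer.block.cuba.Dense(neuron_params, 200, 256),
    slayer.block.cuba.Dense(neuron_params, 256, 10),
).to(device)

error = slayer.loss.SpikeRate(true_rate=0.2, false_rate=0.03, reduction='sum').to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Dummy spike input of shape (batch, neuron, time) and random class labels.
spikes = (torch.rand(8, 200, 300, device=device) > 0.9).float()
label = torch.randint(0, 10, (8,), device=device)

# Desired behavior: ops inside the autocast region run in bfloat16 where safe,
# while parameters and the optimizer step stay in float32.
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    output = net(spikes)
    loss = error(output, label)

loss.backward()       # no GradScaler needed for bfloat16 (unlike float16)
optimizer.step()
optimizer.zero_grad()
```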
Conditions of satisfaction
- The software should support `bfloat16` data types for all relevant operations, including both training and inference.
- Integration with `torch.amp` should be seamless, allowing users to easily switch between `float32` and `bfloat16` or use automatic mixed precision without significant code changes.
- The numerical stability and accuracy of operations with `bfloat16` should be validated, ensuring compatibility with PyTorch's mixed precision training workflows (a possible check is sketched after this list).
- Documentation should include guidelines on using `bfloat16` with `torch.amp`, any limitations, and best practices for users.
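One way the easy-switching and accuracy-validation items could be exercised, continuing from the sketch above. The `run_forward` helper and the 1% mismatch tolerance are hypothetical, introduced here only to illustrate the kind of test that would satisfy these conditions.

```python
import torch

def run_forward(net, spikes, use_bf16: bool):
    """Run one forward pass, optionally under bfloat16 autocast."""
    with torch.autocast(device_type='cuda', dtype=torch.bfloat16, enabled=use_bf16):
        return net(spikes)

# Compare a bfloat16 forward pass against a float32 reference on the same input.
with torch.no_grad():
    ref = run_forward(net, spikes, use_bf16=False)
    out = run_forward(net, spikes, use_bf16=True).float()

# Spike outputs are binary, so a mismatch rate is a natural metric;
# the 1% tolerance here is an arbitrary placeholder.
mismatch = (ref != out).float().mean().item()
assert mismatch < 0.01, f'bfloat16 spike mismatch rate too high: {mismatch:.4f}'
```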