
Support for bfloat16 Data Types with PyTorch Automatic Mixed Precision

Open agvico opened this issue 4 months ago • 0 comments

User story

As a user, I want to use bfloat16 data types in lava-dl, with compatibility for PyTorch's torch.amp (Automatic Mixed Precision), to accelerate training and inference while maintaining numerical accuracy. This would reduce memory usage and improve throughput for large-scale spiking neural networks (SNNs) by leveraging PyTorch's mixed-precision capabilities.
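Below is a minimal sketch of the kind of usage this request describes: a small lava-dl slayer network whose forward/backward pass runs under `torch.autocast` with bfloat16. It assumes the standard `lava.lib.dl.slayer` block API and a CUDA device; the exact neuron parameters and network shape are illustrative only, and whether slayer blocks currently run correctly under autocast is precisely what this issue asks for.

```python
# Sketch of the desired workflow, assuming lava.lib.dl.slayer blocks and
# PyTorch's torch.autocast. Shapes and hyperparameters are placeholders.
import torch
import lava.lib.dl.slayer as slayer

neuron_params = {
    'threshold': 1.25,
    'current_decay': 0.25,
    'voltage_decay': 0.03,
}

# A small CUBA-LIF network built from lava-dl slayer Dense blocks.
net = torch.nn.Sequential(
    slayer.block.cuba.Dense(neuron_params, 200, 256),
    slayer.block.cuba.Dense(neuron_params, 256, 10),
).to('cuda')

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
spikes = torch.rand(8, 200, 300, device='cuda')   # (batch, neurons, time)
target = torch.rand(8, 10, 300, device='cuda')

# Desired behavior: the forward pass and loss run under bfloat16 autocast
# without lava-dl-specific changes. bfloat16 keeps float32's exponent range,
# so no GradScaler is required (unlike float16).
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    output = net(spikes)
    loss = torch.nn.functional.mse_loss(output, target)

loss.backward()   # backward is called outside the autocast region, as recommended
optimizer.step()
```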

Conditions of satisfaction

  • The software should support bfloat16 data types for all relevant operations, including both training and inference.
  • Integration with torch.amp should be seamless, allowing users to easily switch between float32 and bfloat16 or use automatic mixed precision without significant code changes.
  • The numerical stability and accuracy of operations with bfloat16 should be validated, for example by comparing bfloat16 outputs against a float32 reference (see the sketch after this list), and compatibility with PyTorch's mixed-precision training workflows should be confirmed.
  • Documentation should include guidelines on using bfloat16 with torch.amp, any limitations, and best practices for users.
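As one hedged sketch of the validation mentioned above, the helper below compares a float32 forward pass against a bfloat16 autocast pass on the same inputs. The function name, tolerance value, and the `clamp` floor are illustrative assumptions, not project requirements.

```python
# Sketch of a bfloat16-vs-float32 accuracy check for a lava-dl network.
import torch

@torch.no_grad()
def check_bfloat16_accuracy(net, spikes, rel_tol=1e-2):
    """Return the maximum relative deviation between float32 and bfloat16 outputs."""
    ref = net(spikes)  # float32 reference pass
    with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
        out = net(spikes)
    # Elementwise relative error; the clamp avoids division by near-zero values.
    rel_err = (out.float() - ref).abs() / ref.abs().clamp(min=1e-6)
    max_err = rel_err.max().item()
    assert max_err < rel_tol, f'bfloat16 deviation too large: {max_err:.4f}'
    return max_err
```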

agvico, Oct 24 '24 07:10