
Add a CP implementation variant with KV all-gather.

Open xrennvidia opened this pull request • 3 comments

Description

This PR adds a context-parallel (CP) implementation variant based on KV all-gather. Currently, it supports:

  • sliding window attention + causal + FlashAttention
  • full window attention + causal + FlashAttention
  • full window attention + causal + FusedAttention

Support for more configurations will be added later.

The KV all-gather communication is exposed (not overlapped with compute), but its overhead should be small with GQA/MQA, since grouped-query and multi-query attention shrink the K/V tensors that need to be gathered.
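A rough communication-volume estimate illustrates why GQA/MQA keep the all-gather cheap. This is an illustrative sketch only: the head counts, sequence length, and CP size below are made-up example values, not figures from this PR.

```python
def kv_allgather_bytes(seq_len, num_kv_heads, head_dim, cp_size, dtype_bytes=2):
    """Bytes of K+V a rank receives when all-gathering KV across CP ranks."""
    local_tokens = seq_len // cp_size
    # Each rank holds local K and V chunks (factor of 2); the all-gather
    # delivers the chunks of the other cp_size - 1 ranks.
    per_rank_kv = 2 * local_tokens * num_kv_heads * head_dim * dtype_bytes
    return per_rank_kv * (cp_size - 1)

# Example: 16k tokens, head_dim 128, CP size 4, bf16 (2 bytes).
mha = kv_allgather_bytes(16384, num_kv_heads=32, head_dim=128, cp_size=4)
gqa = kv_allgather_bytes(16384, num_kv_heads=8,  head_dim=128, cp_size=4)
mqa = kv_allgather_bytes(16384, num_kv_heads=1,  head_dim=128, cp_size=4)
print(mha // gqa, mha // mqa)  # gathered KV shrinks in proportion to KV heads
```

With 8 KV heads instead of 32, the gathered volume drops 4x; with a single KV head (MQA), 32x.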

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactor

Changes

Please list the changes introduced in this PR:

  • Add `cp_comm_type` in attention.py to allow users to choose which CP implementation they want.
  • Add unit tests for the new implementation variant.
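For intuition, the all-gather CP flavor can be simulated in a single process: each CP rank holds one sequence chunk of Q/K/V, the KV chunks are "all-gathered" (here just concatenated), and each rank attends its local Q chunk against the full KV. This is a toy sketch under assumed semantics, not TransformerEngine's actual code, and it uses plain non-causal softmax attention for simplicity.

```python
import numpy as np

def attention(q, k, v):
    # Plain (non-causal) single-head softmax attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq, dim, cp_size = 8, 4, 2
q = rng.standard_normal((seq, dim))
k = rng.standard_normal((seq, dim))
v = rng.standard_normal((seq, dim))

# Shard the sequence dimension across CP "ranks".
q_chunks = np.split(q, cp_size)
k_chunks = np.split(k, cp_size)
v_chunks = np.split(v, cp_size)

# "All-gather": every rank reassembles the full K and V.
k_full = np.concatenate(k_chunks)
v_full = np.concatenate(v_chunks)

# Each rank computes attention for its local Q chunk only.
out = np.concatenate([attention(qc, k_full, v_full) for qc in q_chunks])

# The sharded result matches single-device attention over the whole sequence.
assert np.allclose(out, attention(q, k, v))
```

Note that only Q stays sharded during the attention computation; K/V are fully materialized on every rank after the gather, which is why small KV head counts (GQA/MQA) matter for this variant's memory and communication cost.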

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [ ] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes

xrennvidia avatar Jul 30 '24 20:07 xrennvidia