
Save CUDA Graph memory by reusing input and output tensors

buptzyb opened this issue 1 year ago • 0 comments

Description

TE allows passing multiple callables into make_graphed_callables() so that they share one CUDA graph memory pool, which saves memory. However, each CUDA graph still allocates its own input and output data buffers. This causes large memory overhead, especially when _order is specified, since graphs must then be captured for every microbatch. This PR allows the CUDA graphs captured in a single make_graphed_callables() call to reuse input or output tensors, according to the pattern of _order. Note that reusing output tensors incurs an extra device-to-device (DtoD) memory copy during CUDA graph execution.
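The memory saving can be sketched with simple buffer accounting (no GPU needed). This is an illustrative model of the idea behind reuse_graph_inputs, not TE's internals: without reuse, each of the N captured graphs keeps its own static input buffer alive; with reuse, graphs that replay at different times share one buffer. The function and variable names below are hypothetical.

```python
def graph_input_bytes(num_graphs: int, input_bytes: int, reuse_inputs: bool) -> int:
    """Total bytes held by static input buffers across captured CUDA graphs
    (illustrative accounting only, not TE's actual allocator)."""
    if reuse_inputs:
        # All graphs replay against the same shared static input buffer.
        return input_bytes
    # Each graph keeps its own capture-time input buffer alive.
    return num_graphs * input_bytes

# Example: 8 microbatches, 64 MiB of input activations each.
per_graph = graph_input_bytes(8, 64 << 20, reuse_inputs=False)  # 512 MiB
shared = graph_input_bytes(8, 64 << 20, reuse_inputs=True)      # 64 MiB
```

The same accounting applies to output buffers, except that sharing them requires copying each graph's result out before the buffer is overwritten, which is the source of the extra DtoD copy mentioned above.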

Fixes # (issue)

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactor

Changes

Please list the changes introduced in this PR:

  • Add reuse_graph_inputs option to make_graphed_callables() API.
  • Add reuse_graph_outputs option to make_graphed_callables() API.

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [x] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] New and existing unit tests pass locally with my changes

buptzyb · Oct 09 '24 13:10