
[PyTorch] Fix cuBLAS workspace leak in applications that initialize+destroy Userbuffers more than once

Open · denera opened this pull request 8 months ago • 3 comments

Description

When initialize_ub()+destroy_ub() pairs are called more than once in the same process (e.g. in-process restarts), the cuBLAS workspace allocation is mishandled and grows exponentially across cycles, leaking GPU memory. This PR guards the workspace expansion in initialize_ub() so the workspace is reused rather than re-grown, avoiding the leak.

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [x] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactoring

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [x] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes

denera avatar Apr 23 '25 22:04 denera

/te-ci pytorch L0 L1

denera avatar Apr 25 '25 18:04 denera

Confirmed offline that this fixes the issue of GPU memory not being reclaimed after user buffer cleanup (destroy_ub).

ksivaman avatar Apr 25 '25 20:04 ksivaman

Pipeline 27525544

ksivaman avatar Apr 25 '25 20:04 ksivaman