SageMaker Sharded Data Parallel Support for Trainer
What does this PR do?
This PR adds support for SageMaker Sharded Data Parallel with SMP version >= 1.15.
The integration largely follows DeepSpeed's checkpointing logic: when sharded data parallel is enabled, the Trainer uses dedicated checkpointing code. The full model is not saved by default (via save_model) because, as with DeepSpeed, gathering it is an expensive synchronization step. Instead, users can opt into full model saving by setting the environment variable HF_TRAINER_SMP_SDP_SAVE_FULL_MODEL in their user script, as in the sketch below.
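For example, a user script might opt in like this (a minimal sketch; the exact value the integration treats as truthy is an assumption, not stated in this PR):

```python
import os

# Opt in to saving the full model when sharded data parallel is enabled.
# The accepted truthy value is an assumption; "1" is used as a conventional choice.
# Set it before the Trainer is constructed.
os.environ["HF_TRAINER_SMP_SDP_SAVE_FULL_MODEL"] = "1"

from transformers import Trainer, TrainingArguments  # usual Trainer setup follows
```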
SageMaker Model Parallel saves the SDP partial checkpoints in a folder named after the input tag with "_partial" appended.
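To illustrate the resulting behavior, here is a small sketch of the naming and gating described above; the function name and return shape are illustrative, not the PR's actual code:

```python
import os

def smp_sdp_checkpoint_dirs(output_dir: str, tag: str) -> dict:
    """Sketch of checkpoint handling when sharded data parallel is enabled."""
    # Partial (sharded) checkpoints go to a folder named "<tag>_partial",
    # mirroring how SageMaker Model Parallel names them.
    partial_dir = os.path.join(output_dir, f"{tag}_partial")

    # The full model is only saved when the user opts in, because gathering
    # the shards is an expensive synchronization step (as with DeepSpeed).
    save_full = bool(os.environ.get("HF_TRAINER_SMP_SDP_SAVE_FULL_MODEL"))

    return {"partial_dir": partial_dir, "save_full_model": save_full}
```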
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the contributor guideline, Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
@sgugger
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.