Sequence Parallel for LoRA
What does this PR do?
Allow sequence parallelism for LoRA training.
Collection: NLP
Changelog
- Set the `sequence_parallel` flag to `False` in LoRA layers so that the output of the LoRA `linear_in` layer can be gathered.
- Perform a manual all-gather (a reduce-scatter during the backward pass) along the sequence dimension before each LoRA module, since the preceding layer is a LayerNorm and its output is still split along the sequence dimension.
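The second changelog item can be sketched as follows. This is a minimal single-process simulation of the pattern in plain Python: in the forward pass the per-rank sequence chunks are gathered into the full sequence, and in the backward pass each rank keeps only the gradient slice for its own chunk. The function names are illustrative, not NeMo or Megatron APIs; real training would use `torch.distributed` collectives over the tensor-parallel group.

```python
def gather_along_sequence(chunks):
    """Forward: concatenate per-rank chunks along the sequence dimension.

    In real sequence-parallel training this is an all_gather over the
    tensor-parallel group; here `chunks` is simply a list of each rank's
    local slice of the sequence.
    """
    full_sequence = []
    for chunk in chunks:
        full_sequence.extend(chunk)
    return full_sequence


def scatter_along_sequence(grad_full, rank, world_size):
    """Backward: each rank keeps the gradient slice matching its own chunk.

    In real training this is a reduce_scatter; since each sequence position
    lives on exactly one rank here, "reduce" degenerates to slicing.
    """
    chunk_len = len(grad_full) // world_size
    start = rank * chunk_len
    return grad_full[start:start + chunk_len]


# Usage: two ranks, each holding two sequence positions.
rank0_chunk, rank1_chunk = [1, 2], [3, 4]
full = gather_along_sequence([rank0_chunk, rank1_chunk])   # full sequence seen by the LoRA module
grad = scatter_along_sequence(full, rank=1, world_size=2)  # rank 1's gradient slice
```

The point of the gather is that the LoRA module needs the whole sequence as input, while the preceding LayerNorm under sequence parallelism only produces the local shard on each rank.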
Jenkins CI
To run Jenkins, a NeMo user with write access must comment `jenkins` on the PR.
Before your PR is "Ready for review"
Pre checks:
- [ ] Make sure you read and followed Contributor guidelines
- [ ] Did you write any new necessary tests?
- [ ] Did you add or update any necessary documentation?
- [ ] Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
- [ ] Reviewer: Does the PR have correct import guards for all optional libraries?
PR Type:
- [x] New Feature
- [ ] Bugfix
- [ ] Documentation
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list specific people who can review PRs in various areas.
Additional Information
- Related to # (issue)
Known issue: `model.ub_tp_comm_overlap=True` is not supported for LoRA + SP.