# Adding support for Llama2 70B LoRA finetuning
## Context
This PR adds support for LoRA finetuning of the Llama2 70B model in torchtune.
## Changelog
- Added a Llama2 70B LoRA builder function in `torchtune/models/llama2/_model_builders.py` and a matching finetuning config (see the sketch below)
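For reference, here is a minimal sketch of what the new builder could look like. It assumes the builder delegates to the shared `lora_llama2` component builder used by the existing 7B/13B LoRA variants; the function name, signature, and import paths are assumptions based on torchtune's conventions, while the architecture constants are the published Llama2 70B values.

```python
# Sketch of a 70B LoRA model builder. ``lora_llama2`` is the shared
# component builder assumed from the existing 7B/13B LoRA variants;
# the exact signature in the PR may differ.
from typing import List

from torchtune.models.llama2._component_builders import lora_llama2
from torchtune.modules import TransformerDecoder


def lora_llama2_70b(
    lora_attn_modules: List[str],
    apply_lora_to_mlp: bool = False,
    apply_lora_to_output: bool = False,
    lora_rank: int = 8,
    lora_alpha: float = 16,
) -> TransformerDecoder:
    """Builder for a Llama2 70B model with LoRA applied to selected layers."""
    return lora_llama2(
        lora_attn_modules=lora_attn_modules,
        apply_lora_to_mlp=apply_lora_to_mlp,
        apply_lora_to_output=apply_lora_to_output,
        vocab_size=32_000,       # Llama2 tokenizer vocabulary size
        num_layers=80,           # Llama2 70B depth
        num_heads=64,
        num_kv_heads=8,          # grouped-query attention
        embed_dim=8192,
        intermediate_dim=28672,  # FFN hidden size for 70B
        max_seq_len=4096,
        attn_dropout=0.0,
        norm_eps=1e-5,
        lora_rank=lora_rank,
        lora_alpha=lora_alpha,
    )
```

A matching config would then reference this builder via its `_component_` field and expose the LoRA hyperparameters (rank, alpha, which attention projections to adapt) alongside the usual training settings.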
## Test plan
- Trained for one epoch and recorded the loss curve (plot omitted)
- Measured training speed (plot omitted)
- Measured peak memory usage (plot omitted)
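For a quick local sanity check that the builder wires LoRA into the right places, something like the following smoke test can be run on the meta device, so no 70B weights are actually allocated. The export path of `lora_llama2_70b` and the `lora` substring in parameter names are assumptions based on torchtune's existing LoRA modules.

```python
import torch

from torchtune.models.llama2 import lora_llama2_70b  # assumed export path

# Build on the meta device so no real memory is allocated for 70B weights.
with torch.device("meta"):
    model = lora_llama2_70b(
        lora_attn_modules=["q_proj", "v_proj"],
        lora_rank=8,
        lora_alpha=16,
    )

# Count LoRA parameters, assuming adapter params are named ``lora_a``/``lora_b``
# as in torchtune's LoRALinear.
lora_params = sum(p.numel() for n, p in model.named_parameters() if "lora" in n)
total_params = sum(p.numel() for p in model.parameters())
print(f"LoRA params: {lora_params:,} of {total_params:,} total")
```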
## :link: Helpful Links
- :test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/788
- :page_facing_up: Preview Python docs built from this PR
## :white_check_mark: No Failures
As of commit bda498dc39c1764372c9da26e91f8fcb131a51de with merge base a7b507b888a49516700e9e473d4c24d04544d140:
:green_heart: Looks good so far! There are no failures yet. :green_heart:
## Codecov Report
Attention: Patch coverage is 50.00000% with 2 lines in your changes missing coverage. Please review.
Project coverage is 27.53%. Comparing base (a878f3b) to head (0f0df12). Report is 18 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| torchtune/models/llama2/_model_builders.py | 50.00% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main     #788      +/-   ##
==========================================
+ Coverage   27.48%   27.53%   +0.05%
==========================================
  Files         144      143       -1
  Lines        5966     5937      -29
==========================================
- Hits         1640     1635       -5
+ Misses       4326     4302      -24
```