torchtune
llama 3.1 has correct `max_seq_len` for all versions
Context
What is the purpose of this PR? Is it to
- [ ] add a new feature
- [x] fix a bug
- [ ] update tests and/or documentation
- [ ] other (please add here)
Please link to any issues this PR addresses. #2202
Changelog
What are the changes made in this PR?
- Ensure all Llama 3.1 instantiations use the correct `max_seq_len` the model was originally trained with, based on the HF config
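For context, the trained context length comes from the `max_position_embeddings` field of the HF config. A minimal sketch of checking it (the model id and the use of `transformers` here are illustrative, not part of this PR; the meta-llama repos are gated and require authentication):

```python
# Illustrative check of the trained context length via the HF config.
# The model id is an assumption for illustration; the repo is gated.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Llama-3.1-405B")
print(cfg.max_position_embeddings)  # 131072 (128K) across the Llama 3.1 family
```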
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- [x] run pre-commit hooks and linters (make sure you've first installed via `pre-commit install`)
- [ ] add unit tests for any new functionality
- [ ] update docstrings for any new or updated methods or classes
- [ ] run unit tests via `pytest tests`
- [ ] run recipe tests via `pytest tests -m integration_test`
- [x] manually run any new or modified recipes with sufficient proof of correctness
- [ ] include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it. Here is a docstring example and a tutorial example
- [x] I did not change any public API
- [ ] I have added an example to docs or docstrings
:link: Helpful Links
:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2203
- :page_facing_up: Preview Python docs built from this PR
Note: Links to docs will display an error until the docs builds have been completed.
:white_check_mark: No Failures
As of commit 11fc9df63b7214014123d34a577f5a9289a57ee5 with merge base aa8f365f91a69aa36aaea14cf6f03ccd45310bb6:
:green_heart: Looks good so far! There are no failures yet. :green_heart:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks for the PR! Do you mind updating the docstrings too, so they include the new arg? I think that for 405B the default is wrong; it should be 131k.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 23.85%. Comparing base (aa8f365) to head (d2f581c).
Additional details and impacted files
@@ Coverage Diff @@
## main #2203 +/- ##
=======================================
Coverage 23.85% 23.85%
=======================================
Files 344 344
Lines 20658 20658
=======================================
Hits 4928 4928
Misses 15730 15730
:umbrella: View full report in Codecov by Sentry.
This seems inconsistent with our other builders, which do not specify a max seq len parameter. If you want to change the default max seq len for llama3_1, you should use the base builder `llama3_1()` instead of the 8b and 405b builders, since those are meant to expose minimal parameters and give you all the defaults for the correct model. But let us know if this is too prohibitive.
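For example, overriding the context length through the base builder might look like the sketch below (the hyperparameter values shown are 8B-sized approximations for illustration; check the actual `llama3_1()` signature in torchtune):

```python
# Hypothetical usage: build a Llama 3.1 model through the base builder so
# max_seq_len can be set explicitly. Hyperparameters are 8B-sized
# approximations, shown for illustration only.
from torchtune.models.llama3_1 import llama3_1

model = llama3_1(
    vocab_size=128_256,
    num_layers=32,
    num_heads=32,
    num_kv_heads=8,
    embed_dim=4096,
    intermediate_dim=14336,
    max_seq_len=131_072,  # override here instead of patching the 8b/405b builders
)
```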
@felipemello1 @RdoubleA thank you for the comments, can you please check the updated implementation? I set the correct default from the HF model config for Llama 3.1 models in the _model_builders file.
I am on PTO this week, but if Rafi doesn't approve it by next week, I will review it. @RdoubleA, you said "If you want to change the default max seq len for llama3_1 you should use the base builder llama3_1()". That is true, but if the model is the LoRA, this parameter is not exposed. IMO, all model parameters should be exposed in the LoRA builder. Do you disagree?
Commented on the issue as well, but I think we should only change the `llama3_1_405b` and `lora_llama3_1_405b` builders here. If we change `llama3_1` to hardcode `max_seq_len`, it'd be inconsistent with our other models.
@akashc1 thanks for catching the bug. Do you want to push the changes I mentioned? (Basically your PR should just set `max_seq_len=131072` in `llama3_1_405b` and `lora_llama3_1_405b`.) If not, let me know and I can push to your PR.
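For reference, a minimal sketch of that suggested change (the surrounding hyperparameters are illustrative approximations of the 405B architecture, not copied from the torchtune source; `_model_builders.py` has the authoritative values, and `lora_llama3_1_405b` would get the same default):

```python
# Sketch of the suggested fix: default the 405B builder to the 131072-token
# context length Llama 3.1 was trained with (fixes #2202). Values other than
# max_seq_len are illustrative approximations of the 405B architecture.
from torchtune.models.llama3_1._component_builders import llama3_1
from torchtune.modules import TransformerDecoder


def llama3_1_405b() -> TransformerDecoder:
    return llama3_1(
        vocab_size=128_256,
        num_layers=126,
        num_heads=128,
        num_kv_heads=8,
        embed_dim=16384,
        intermediate_dim=53248,
        max_seq_len=131_072,  # was 8192
        attn_dropout=0.0,
        norm_eps=1e-5,
        rope_base=500_000,
    )
```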