Aoti
Context
What is the purpose of this PR? Is it to
- [ ] add a new feature
- [ ] fix a bug
- [ ] update tests and/or documentation
- [ ] other (please add here)
Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- [ ] run pre-commit hooks and linters (make sure you've first installed via `pre-commit install`)
- [ ] add unit tests for any new functionality
- [ ] update docstrings for any new or updated methods or classes
- [ ] run unit tests via `pytest tests`
- [ ] run recipe tests via `pytest tests -m integration_test`
- [ ] manually run any new or modified recipes with sufficient proof of correctness
- [ ] include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it. Here is a docstring example and a tutorial example.
- [ ] I did not change any public API
- [ ] I have added an example to docs or docstrings
Download the artifacts from https://www.internalfb.com/manifold/explorer/executorch/tree/models/llama/llama3_2_mm_v4
- tune.pth
- dog.jpg

into a local directory, say llama3_2_mm_v4/, then run `python export_flamingo.py llama3_2_mm_v4/`.
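For a quick sanity check that the artifacts downloaded correctly, here is a minimal sketch; the directory layout and the assumption that tune.pth is a plain torch-loadable checkpoint are mine, not part of this PR:

```python
# Hypothetical sanity check of the downloaded artifacts; adjust paths as needed.
from pathlib import Path

import torch
from PIL import Image

artifact_dir = Path("llama3_2_mm_v4")

# Assumes tune.pth is a torch-loadable checkpoint (e.g. a state dict).
ckpt = torch.load(artifact_dir / "tune.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(f"tune.pth loaded with {len(ckpt)} top-level keys")

# dog.jpg is the sample image used as the vision input for the export.
image = Image.open(artifact_dir / "dog.jpg")
print(f"dog.jpg size: {image.size}")
```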
Hi @ebsmothers, thanks for asking. These are all the changes required to export Llama 3.2 MM 11B using torch.aot_compile() / AOT Inductor (AOTI). I sent this out so we can work with the PyTorch compiler folks to debug together. Most of the changes will live outside of torchtune, and I'll send separate PRs if I think some of them should live in the torchtune repo.
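For anyone following along, below is a minimal sketch of the AOTI export flow on a toy module. This is not the actual Llama 3.2 MM export (that lives in export_flamingo.py); the toy model and input shapes are placeholders, and the exact entry point (`torch._export.aot_compile` here) varies across PyTorch versions:

```python
# Toy illustration of AOT Inductor export; not the real Llama 3.2 MM model.
import torch


class ToyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))


model = ToyModel().eval()
example_inputs = (torch.randn(2, 16),)

with torch.no_grad():
    # Lowers the model through export + Inductor and returns the path to a
    # compiled shared library that can be loaded without the Python runtime.
    so_path = torch._export.aot_compile(model, example_inputs)

print(f"AOTI artifact written to {so_path}")
```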