llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting a...
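For reference, a typical composable FSDP + PEFT (LoRA) fine-tuning launch looks roughly like the sketch below. Flag names follow the llama-recipes training config as I understand it; the script path and model/output paths are placeholders, so check them against your installed version.

```bash
# Minimal sketch: LoRA (PEFT) fine-tuning sharded with FSDP on one node, 4 GPUs.
# finetuning.py, the model path, and the output dir are placeholders; flag names
# may differ between llama-recipes versions.
torchrun --nnodes 1 --nproc_per_node 4 finetuning.py \
  --enable_fsdp \
  --use_peft --peft_method lora \
  --model_name /path/to/llama/model \
  --output_dir /path/to/save/peft/checkpoint
```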
# What does this PR do?

Fixes # (issue)

## Feature/Issue validation/testing

Please describe the tests that you ran to verify your changes and relevant result summary. Provide instructions so...
### System Info

PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu...
Updating notebooks to Llama 3 where possible. Removing the AWS-specific notebook on prompting Llama 2, moving it to the quick start guide and leaving it as the only prompt eng...
Updating the notebook to reflect Llama 2 or Llama 3 usage by removing the Llama version reference.

## Feature/Issue validation/testing

Not tested on Azure yet. Based on how Azure works, it should work...
Final OctoAI notebook updates in the recipes.

## Feature/Issue validation/testing

Tested running the notebooks and verified they work as expected.

## Before submitting

- [ ] This PR fixes a...
Thanks for the tutorials! I have several small questions about fine-tuning and using the model. When doing full-parameter fine-tuning using FSDP only, **Q1:** should we set `save_optimizer` to True...
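For context, a full-parameter fine-tune with FSDP only is typically launched along the lines of the sketch below; as I understand it, `save_optimizer` is a training-config flag that also checkpoints the optimizer state next to the sharded model weights, which is what you would want in order to resume training. Script path, model path, and checkpoint folder names are placeholders.

```bash
# Sketch: full-parameter fine-tuning with FSDP (no PEFT) on one node, 8 GPUs.
# --save_optimizer asks the trainer to save optimizer state alongside the
# sharded checkpoint so the run can be resumed. Paths and flags are assumptions;
# verify against the training config of your llama-recipes version.
torchrun --nnodes 1 --nproc_per_node 8 finetuning.py \
  --enable_fsdp \
  --model_name /path/to/llama/model \
  --dist_checkpoint_root_folder model_checkpoints \
  --dist_checkpoint_folder fine-tuned \
  --save_optimizer
```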