Performing a second round of fine-tuning on top of an already fine-tuned model
How can I perform a second round of fine-tuning, starting from the model produced by my first fine-tuning run?
@xjy2020 thanks for creating the issue. In general you should be able to run a second fine-tuning job using the output paths of your first fine-tune. So e.g. for llama3/8B_lora you would want to modify these lines of the config to point to the directory and filename(s) of your first fine-tuned checkpoint.
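As a rough sketch, the checkpointer section of the second run's config might look something like this after the edit (the paths and filename below are hypothetical placeholders; the actual filenames depend on what your first run wrote out):

```yaml
# Hypothetical example: point the checkpointer at the output of the first fine-tune
checkpointer:
  _component_: torchtune.utils.FullModelMetaCheckpointer
  checkpoint_dir: /path/to/first_finetune/output   # directory from run 1
  checkpoint_files: [meta_model_0.pt]              # filename(s) from run 1
  output_dir: /path/to/second_finetune/output      # where run 2 will write
  model_type: LLAMA3
resume_from_checkpoint: False  # this is a fresh fine-tune, not a resumed one
```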
Would the user also need to swap out this: torchtune.utils.FullModelMetaCheckpointer here, especially if the fine-tuned Llama3 was downloaded from HF?
Hi @bjohn22, it depends on the type of checkpoint that you download. FullModelMetaCheckpointer checkpoints can still be downloaded from HF. For instance, the tune download command given for our Llama3-8B configs (see e.g. here) will download Meta format checkpoints from HF. In that case you would still use FullModelMetaCheckpointer. Note that for Llama3-8B-Instruct the same model page contains both HF format and Meta format checkpoints. See here -- the HF format weights are in the safetensors files, while the Meta format weights are under the subdirectory original/.
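If you do end up with an HF format checkpoint (safetensors or pytorch_model-*.bin files rather than the Meta format weights under original/), the swap would look roughly like this. This is a sketch only; the directory and filenames are illustrative placeholders, not real paths:

```yaml
# Hypothetical example for an HF format checkpoint:
# use the HF checkpointer instead of the Meta one
checkpointer:
  _component_: torchtune.utils.FullModelHFCheckpointer
  checkpoint_dir: /path/to/hf_checkpoint           # dir containing safetensors
  checkpoint_files: [
    model-00001-of-00004.safetensors,
    model-00002-of-00004.safetensors,
  ]                                                # list all shards from the repo
  output_dir: /path/to/output
  model_type: LLAMA3
```

The key point is that the checkpointer component must match the on-disk format of the weights you are loading, not where they were hosted.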
For other community fine-tuned checkpoints on the hub, it may vary, but I suspect many will be in HF format. Btw you can also read our checkpointing deep-dive which covers this topic in more detail.
@xjy2020 let us know if you have any more questions or if the comments were not helpful! I'm closing this issue, but please feel free to reopen if there are more follow-ups.