LoRA generation?
Does the generation script support using LoRA adapters after fine-tuning?
Hi @bdytx5, thanks for the issue! If you fine-tune a model with LoRA in torchtune, the adapter weights are merged back into the original model weights before the final checkpoint is saved, which makes inference more efficient. That means you can pass the final checkpoint directly to the generation script; the adapter weights are already baked in. If it helps, our end-to-end tutorial also covers this.
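As a rough sketch of what that looks like, assuming a LoRA fine-tuning run followed by generation with torchtune's `tune` CLI (the config names and the checkpoint directory below are illustrative placeholders, not your exact setup):

```shell
# Fine-tune with LoRA; torchtune merges the adapter weights into the
# base model weights before writing the final checkpoint.
tune run lora_finetune_single_device --config llama3/8B_lora_single_device \
    checkpointer.output_dir=/tmp/lora_finetune_output  # hypothetical output dir

# Point the generation recipe at that merged checkpoint directly --
# no separate adapter-loading step is needed.
tune run generate --config generation \
    checkpointer.checkpoint_dir=/tmp/lora_finetune_output \
    prompt="Tell me a joke."
```

Exact config names and checkpointer fields depend on your model and torchtune version, so check `tune ls` and the generation config for your release.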
Thanks!
Closing this issue. @bdytx5, please feel free to reopen if this didn't help.