AQLM
Global finetuning?
How does your updated fine-tuning method work vs. the one in your arXiv paper?
Hi @tsengalb99, we have re-run the fine-tuning following mostly the QuIP# fine-tuning protocol from your arXiv paper. Specifically, we split the calibration data into train and validation sets and perform block-wise fine-tuning with early stopping instead of a threshold on the rate of training-loss change. The main improvement, however, came from end-to-end fine-tuning: we cache the logits of the dense model and fine-tune the quantized model with the KL divergence between its logits and the original ones. Here too we split the data into train/validation sets and stop early once the validation loss starts to increase.
We will provide the implementation of the finetuning code soon.
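Until the official code lands, here is a minimal sketch of the end-to-end KL fine-tuning loop described above, assuming cached dense-model logits and an early-stopping criterion on a held-out split. All names (`quantized_model`, `cached_train_logits`, etc.) are hypothetical placeholders, not the actual AQLM implementation.

```python
import torch
import torch.nn.functional as F

def e2e_kl_finetune(quantized_model, train_batches, val_batches,
                    cached_train_logits, cached_val_logits,
                    lr=1e-5, max_epochs=10):
    """Hypothetical sketch: fine-tune a quantized model against cached
    dense-model logits with a KL-divergence loss and early stopping."""
    opt = torch.optim.Adam(
        [p for p in quantized_model.parameters() if p.requires_grad], lr=lr)

    def kl_loss(student_logits, teacher_logits):
        # KL(teacher || student), averaged over sequences in the batch.
        return F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.log_softmax(teacher_logits, dim=-1),
            log_target=True, reduction="batchmean")

    best_val, best_state = float("inf"), None
    for epoch in range(max_epochs):
        quantized_model.train()
        for batch, teacher_logits in zip(train_batches, cached_train_logits):
            opt.zero_grad()
            loss = kl_loss(quantized_model(batch), teacher_logits)
            loss.backward()
            opt.step()

        # Early stopping: evaluate on the held-out validation split.
        quantized_model.eval()
        with torch.no_grad():
            val = sum(kl_loss(quantized_model(b), t).item()
                      for b, t in zip(val_batches, cached_val_logits))
        if val < best_val:
            best_val = val
            best_state = {k: v.clone()
                          for k, v in quantized_model.state_dict().items()}
        else:
            break  # validation loss started increasing -> stop

    if best_state is not None:
        quantized_model.load_state_dict(best_state)
    return quantized_model
```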
Cool, good to hear that our fine-tuning works for AQLM too. I also observed that e2e fine-tuning can do most of what blockwise fine-tuning does, which is good b/c blockwise fine-tuning is more expensive than e2e.