Composable SFT
Paper: https://arxiv.org/pdf/2110.07560.pdf
Code: https://github.com/cambridgeltl/composable-sft
TODOs:
- Determine hyperparameters we should use for comparable testing. Likely x train steps + 50k rewound steps with one iteration of their method, or maybe 5 iterations + 10k train steps twice? Not sure yet.
- Add loading an SFT from path (NOT MAIN PRIORITY)
If we want to train both adapters and Composable SFT at once, this will require some extra code (see the sketch below). Probably not too bad, but it would need extra testing to make sure all the correct parameters get frozen.
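For concreteness, a minimal sketch of that freezing logic, assuming a hypothetical `sft_masks` mapping (composable SFT actually keeps whole tensors trainable and masks gradients, so the real integration would differ):

```python
def freeze_for_joint_training(model, sft_masks):
    """Illustrative only: keep adapter weights and SFT-touched tensors
    trainable; freeze everything else. `sft_masks` (hypothetical) maps
    parameter names to boolean masks of the entries SFT may update."""
    for name, param in model.named_parameters():
        if "adapter" in name:
            param.requires_grad = True   # adapter weights train normally
        elif name in sft_masks:
            # Whole tensor stays trainable; gradients must then be masked
            # after backward, e.g. param.grad.mul_(sft_masks[name]).
            param.requires_grad = True
        else:
            param.requires_grad = False
```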
Issue #23 is addressed by this.
EDITED:
Here for reference are loss curves (Sparse FT) compared to an adapter test run! The loss curves look great. I'm curious to see whether the shallower eval-loss curve will imply lower downstream performance :)
```
06/27/2022 21:55:58 - INFO - __main__ - adapter elements: 3697664
06/27/2022 21:55:58 - INFO - __main__ - K value for SFT is 3670016.0
```
My calc for the # of tunable params is still slightly off; I'm missing something minor. (In the file, I estimate the # of params for the model using pfeiffer+inv with adapter_reduction_factor.)
Yong, did you run calcs to find # of tunable adapter params, or just get that number by adding an adapter to the model?
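For reference, here's a quick sketch of the "just add an adapter and count" route, assuming the adapter-transformers fork (the model name and reduction factor below are illustrative, not what the run necessarily uses):

```python
from transformers import AutoModelForMaskedLM
from transformers.adapters import AdapterConfig

# Illustrative model and reduction factor -- substitute whatever the run uses.
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
config = AdapterConfig.load("pfeiffer+inv", reduction_factor=4)
model.add_adapter("lang_adapter", config=config)
model.train_adapter("lang_adapter")  # freezes all non-adapter parameters

# Running sum of trainable parameters == "adapter elements" in the log above.
adapter_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"adapter elements: {adapter_params}")
```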
@yongzx This is ready to merge; the only thing needing changing is the calc for the # of parameters to fine-tune.
> Yong, did you run calcs to find # of tunable adapter params, or just get that number by adding an adapter to the model?

> the only thing needing changing is the calc for # of parameters to fine-tune.
I can help do this, no worries! It's just a running sum of trainable parameters. I will review the code over the weekend.
I just did a quick read of the commit. It seems like for SFT we don't need to modify anything in adapter-transformers?
Yep! Just

```bash
git clone https://github.com/cambridgeltl/composable-sft.git
cd composable-sft
pip install -e .
```

to install their code. Thanks!
Also, re:

> I can help do this, no worries! It's just a running sum of trainable parameters. I will review the code over the weekend.

What I meant by this was to set the number of parameters this method changes (it's fully configurable) so that the total is equivalent to using pfeiffer+inv adapters, not just to count the # of trainable params with this method.
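Something like this (a sketch; `ft_params_num` is what the upstream example scripts appear to call the budget K, so double-check the exact name before relying on it):

```python
# Set the SFT budget K to the adapter-equivalent total, so both methods
# tune the same number of parameters.
adapter_params = 3697664        # pfeiffer+inv "adapter elements" from the log above
ft_params_num = adapter_params  # name of composable-sft's K setting: unverified
```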
I will prioritize evaluation test suites over this for now, but I hope to finish reviewing this before our meeting this Friday.
No problem!
For reference, to install the dependency, now do this instead:

```bash
git clone https://github.com/haileyschoelkopf/composable-sft.git
cd composable-sft
pip install -e .
```

I'm hoping to add Random and FISH masking strategies to this fork.
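To sketch what I mean by a Random strategy (my own illustration, not composable-sft's API): sample a budget of entries uniformly at random per tensor and keep only those tunable. A FISH-style mask would instead rank entries by squared gradients (empirical Fisher) and keep the top K.

```python
import torch

def random_masks(model, k):
    """Illustrative random masking: allocate a global budget of k entries
    to each tensor in proportion to its size, chosen uniformly at random.
    Returns boolean masks; apply after backward with p.grad.mul_(mask)."""
    total = sum(p.numel() for p in model.parameters())
    masks = {}
    for name, p in model.named_parameters():
        k_local = max(1, int(k * p.numel() / total))  # proportional share
        idx = torch.randperm(p.numel())[:k_local]
        mask = torch.zeros(p.numel(), dtype=torch.bool)
        mask[idx] = True
        masks[name] = mask.view_as(p)
    return masks
```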
Referring to the MLM training scripts, the numbers of training steps for full-model and sparse fine-tuning seem to be equal. Since we are comparing sparse fine-tuning to other adapter methods, we need to set both to 25K steps.
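If I'm reading the scripts right, that would mean an invocation along these lines (the flag names are from memory of the composable-sft examples and unverified; please check them against the repo):

```bash
# Equalize the two phases at 25K steps each (flag names unverified).
python run_mlm.py \
  --full_ft_max_steps_per_proc 25000 \
  --sparse_ft_max_steps_per_proc 25000
  # ...plus the usual model/data arguments
```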
558f674 now supports composable SFT. Hailey, do you want to test it out?
Yes, let me try running this with those parameters!