
Question on "mlm" in continued pre-training

Open tanliboy opened this issue 1 year ago • 2 comments

Hi Team,

This is an amazing handbook. In the continued pre-training script (run_cpt.py), I noticed that the "mlm" (masked language modeling) parameter is not used in the training process. I had thought that the choice of objective, masked language modeling versus next-token prediction (sketched below), was the main difference between pre-training and supervised fine-tuning.

  • Has there been an assessment of the efficacy of continued pre-training with "mlm" compared to without it?
  • What advice or guidelines do you have for incorporating "mlm" into the continued pre-training process?
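
For context on what I mean by the two objectives, here is a minimal sketch using the transformers data collators; the tokenizer names are only illustrative and none of this is taken from run_cpt.py:

```python
# A minimal sketch (not taken from run_cpt.py) of the two collation modes in
# transformers; the tokenizer names here are only illustrative.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Causal LM: labels are a copy of input_ids and the model is trained to
# predict token t+1 from tokens 1..t.
causal_tok = AutoTokenizer.from_pretrained("gpt2")
causal_tok.pad_token = causal_tok.eos_token  # gpt2 has no pad token by default
causal_collator = DataCollatorForLanguageModeling(tokenizer=causal_tok, mlm=False)

# Masked LM (BERT-style): ~15% of positions are masked and only those positions
# contribute to the loss. This requires a mask token and an MLM head, which
# decoder-only chat models generally do not have.
mlm_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm_collator = DataCollatorForLanguageModeling(
    tokenizer=mlm_tok, mlm=True, mlm_probability=0.15
)

batch = causal_collator([causal_tok("continued pre-training example")])
print(batch["labels"])  # same ids as the input, with padding positions set to -100
```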

Thanks! Li

tanliboy avatar Jun 03 '24 18:06 tanliboy

All models in this handbook are trained with causal language modeling. I think MLM is out of scope for this project.
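
For reference, continued pre-training here is just the causal objective run over raw text. A rough sketch, assuming the trl SFTTrainer API (the dataset and model names below are placeholders, and argument names have moved between SFTTrainer and SFTConfig across trl releases):

```python
# A rough sketch of causal-LM continued pre-training with trl's SFTTrainer
# (not the handbook's exact run_cpt.py); adjust argument names for your trl version.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# "my-org/my-domain-corpus" is a placeholder dataset with a plain "text" column.
train_dataset = load_dataset("my-org/my-domain-corpus", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # any decoder-only causal LM
    args=TrainingArguments(output_dir="cpt-output", per_device_train_batch_size=4),
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    packing=True,  # concatenate documents into fixed-length blocks
)
trainer.train()  # standard next-token (causal) loss; there is no mlm flag here
```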

xiyang-aads-lilly avatar Jun 18 '24 02:06 xiyang-aads-lilly

Thanks for your reply, @xiyang-aads-lilly !

If we need to fine-tune on a small set of documents (<50M tokens), what would be the best strategy for integrating that knowledge into an LLM without causing significant regressions?

I have heard discussions comparing re-warming + re-sampling for continued pre-training with generating conversational data for instruction fine-tuning. Given that we use the SFT trainer for both continued pre-training and instruction fine-tuning (assuming we are not using the completion-only data collator), it seems unnecessary to generate conversational data for instruction fine-tuning. Thoughts?
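
To make the completion-only point concrete, here is a rough sketch of the collator I mean, assuming trl's DataCollatorForCompletionOnlyLM (the response template and model name are just examples, not handbook defaults):

```python
# A minimal sketch of the completion-only distinction mentioned above, assuming
# trl's DataCollatorForCompletionOnlyLM; template and model name are illustrative.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Without this collator, SFTTrainer computes the loss on every token, so the
# objective is the same next-token prediction as continued pre-training and
# only the data format (raw text vs. chat-formatted text) differs.
collator = DataCollatorForCompletionOnlyLM(
    response_template="### Answer:",  # must tokenize the same way it appears in the data
    tokenizer=tokenizer,
)

# Passing data_collator=collator to SFTTrainer (with packing disabled) sets the
# prompt tokens' labels to -100, so the loss is computed only on the completion.
```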

tanliboy avatar Jun 19 '24 03:06 tanliboy