alignment-handbook
Question on "mlm" in continued pre-training
Hi Team,
This is an amazing handbook. In the continued pre-training script (run_cpt.py), I noticed that the "mlm" (Masked Language Modeling) parameter is not used in the training process. I thought that the training objective (MLM vs. next-token prediction) was the major differentiator between pre-training and supervised fine-tuning.
- Has there been an assessment of the efficacy of continued pre-training with "mlm" compared to without it?
- What's your advice or guidelines for incorporating "mlm" into the continued pre-training process?
Thanks! Li
All models here are trained with causal language modeling; I think MLM is out of scope for this project.
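To make the distinction concrete, here is a minimal sketch (not the handbook's actual code) of how the two objectives differ in label construction. For causal LM, labels are just the input ids (the loss internally shifts them by one so position t predicts token t+1); for MLM, a random subset of tokens is replaced by a mask token and the loss is computed only at those positions. The function names and the 15% mask probability are illustrative assumptions:

```python
import random

IGNORE_INDEX = -100  # positions with this label are excluded from the loss

def clm_labels(token_ids):
    # Causal LM: labels equal the inputs; the model shifts them
    # internally so each position predicts the next token.
    return list(token_ids)

def mlm_labels(token_ids, mask_token_id, mask_prob=0.15):
    # MLM: randomly mask tokens; only masked positions contribute
    # to the loss, all others are set to IGNORE_INDEX.
    inputs, labels = [], []
    for tok in token_ids:
        if random.random() < mask_prob:
            inputs.append(mask_token_id)
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(IGNORE_INDEX)
    return inputs, labels
```

In the Hugging Face stack the same switch is a single flag on the collator (`DataCollatorForLanguageModeling(tokenizer, mlm=False)` for causal LM), which is why run_cpt.py never needs an explicit "mlm" code path.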
Thanks for your reply, @xiyang-aads-lilly !
If we need to fine-tune on a small set of documents (<50M tokens), what would be the best strategy for integrating that knowledge into the LLM without causing significant regressions?
I have heard discussions weighing re-warming + re-sampling for continued pre-training against generating conversational data for instruction fine-tuning. Given that SFT is used for both continued pre-training and instruction fine-tuning (assuming no completion-only data collator, so the loss covers the full sequence in both cases), it seems unnecessary to generate conversational data for instruction fine-tuning. Thoughts?
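For the continued pre-training side of that comparison, the usual preprocessing is to concatenate the raw documents (separated by an EOS token) and chunk the stream into fixed-length blocks, so no conversational formatting is involved at all. A minimal sketch, with illustrative names and a drop-the-remainder policy assumed:

```python
def pack_documents(tokenized_docs, eos_id, block_size):
    """Concatenate tokenized documents with EOS separators, then
    split the stream into fixed-length blocks for causal-LM training.
    The trailing remainder shorter than block_size is dropped."""
    stream = []
    for doc in tokenized_docs:
        stream.extend(doc)
        stream.append(eos_id)  # document boundary marker
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]
```

With <50M tokens this packing plus a re-warmed (briefly increased, then decayed) learning rate and re-sampling some of the original pre-training mix is the commonly discussed recipe for limiting regressions, though whether it beats synthetic instruction data for knowledge injection is, as far as I know, still an open question.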