
Experiment with using RepSet of 196k for EvolInstruct 1k

Open walking-octopus opened this issue 1 year ago • 1 comment

The new WizardLM 13B v1.1 was fine-tuned on a 1k instruction dataset, similar to the approach in the LIMA paper.

I wonder if making the 1k dataset more representative of the initial 100k distribution can boost performance on some tasks.

Google had an interesting paper, "Extracting representative subset from extensive text data for training pre-trained language models", in which they applied subset selection to the Colossal Clean Crawled Corpus (C4) to see whether it improved the performance of LLMs pretrained on fewer tokens, and it did.

Perhaps this can be of use for diverse instruction alignment too?
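For illustration, here is a minimal sketch of one way such a representative subset could be extracted: cluster instruction embeddings and keep the example nearest each centroid. This is only an assumed approach, not the method from the cited paper or the WizardLM pipeline; the library choices (sentence-transformers, scikit-learn), the file name, and the field names are placeholders.

```python
# Hypothetical sketch: select a 1k "representative subset" from a larger
# instruction dataset by clustering instruction embeddings and taking the
# point closest to each cluster centroid. Libraries and data layout are
# assumptions, not from the WizardLM training code.
import json
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def representative_subset(instructions, k=1000, model_name="all-MiniLM-L6-v2"):
    # Embed every instruction with a small sentence encoder.
    model = SentenceTransformer(model_name)
    emb = model.encode(instructions, normalize_embeddings=True, show_progress_bar=True)

    # One cluster per desired sample, so the subset mirrors the overall distribution.
    km = KMeans(n_clusters=k, n_init="auto", random_state=0).fit(emb)

    chosen = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        if len(idx) == 0:
            continue
        # Keep the member closest to the centroid as that cluster's representative.
        dists = np.linalg.norm(emb[idx] - km.cluster_centers_[c], axis=1)
        chosen.append(int(idx[np.argmin(dists)]))
    return chosen

if __name__ == "__main__":
    # "evol_instruct_196k.json" is a placeholder name with an assumed
    # [{"instruction": ..., "output": ...}, ...] layout.
    data = json.load(open("evol_instruct_196k.json"))
    subset_ids = representative_subset([d["instruction"] for d in data], k=1000)
    json.dump([data[i] for i in subset_ids], open("evol_instruct_rep_1k.json", "w"))
```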

walking-octopus avatar Jul 12 '23 13:07 walking-octopus

Thank you for your suggestions. We will read this paper.

ChiYeungLaw avatar Jul 14 '23 05:07 ChiYeungLaw