WizardLM
new format, new finetune code?
The Llamax code knows how to handle Alpaca-formatted QA data, but I didn't see anything in there to handle ShareGPT-formatted data.
How do I finetune with the new format? Your finetune guide (https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/README.md#fine-tuning) still references the 70k dataset.
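For anyone unfamiliar with the difference, here's a rough sketch of the two record shapes (field names follow the common Alpaca/ShareGPT conventions; the exact keys in the new WizardLM dump may differ):

```python
# Alpaca-style record: one instruction/response pair per example.
alpaca_example = {
    "instruction": "Explain what a transformer is.",
    "input": "",  # optional extra context, often empty
    "output": "A transformer is ...",
}

# ShareGPT-style record: a multi-turn conversation per example.
sharegpt_example = {
    "id": "abc123",
    "conversations": [
        {"from": "human", "value": "Explain what a transformer is."},
        {"from": "gpt", "value": "A transformer is ..."},
        {"from": "human", "value": "Can you give an example?"},
        {"from": "gpt", "value": "Sure, ..."},
    ],
}
```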
Yep, noticed this too. Perhaps @nlpxucan forgot to update that particular section of the readme during the last commit (2 days ago).
Perhaps this is the relevant spot:
https://github.com/nlpxucan/WizardLM/blob/94f9c792df4b91589c8c236a566ddc63d4868ec2/WizardLM/src/train_freeform.py#LL50C5-L50C17
Or maybe they used FastChat rather than Llamax.
I'm going to presume we're using FastChat until I hear otherwise.
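In the meantime, if anyone wants to reuse the existing Alpaca-style training path, something like this should flatten ShareGPT conversations into instruction/output pairs. This is an untested sketch, not anything from the repo; it assumes the `conversations`/`from`/`value` keys shown above, and the filenames are hypothetical:

```python
import json

def sharegpt_to_alpaca(path):
    """Flatten ShareGPT multi-turn records into Alpaca-style pairs.

    Each human turn immediately followed by a gpt turn becomes one
    instruction/output example. Multi-turn context is dropped, so
    this is lossy -- fine for a quick test, not ideal for training
    a real chat model.
    """
    with open(path) as f:
        records = json.load(f)

    pairs = []
    for rec in records:
        turns = rec.get("conversations", [])
        # Walk consecutive turn pairs and keep human -> gpt exchanges.
        for human, gpt in zip(turns, turns[1:]):
            if human.get("from") == "human" and gpt.get("from") == "gpt":
                pairs.append({
                    "instruction": human["value"],
                    "input": "",
                    "output": gpt["value"],
                })
    return pairs

if __name__ == "__main__":
    # Hypothetical input/output filenames -- adjust to the actual dump.
    pairs = sharegpt_to_alpaca("wizardlm_sharegpt.json")
    with open("wizardlm_alpaca.json", "w") as f:
        json.dump(pairs, f, indent=2)
```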