
Training data

Open NtaylorOX opened this issue 1 year ago • 6 comments

Great work and repo.

While I'm aware the actual training likely follows a general LLM training script/flow, it would be nice to see the training scripts. Is there any plan to upload them?

NtaylorOX avatar Oct 31 '23 11:10 NtaylorOX

Thank you very much for your interest. We mostly modified the training architecture of FastChat for the currently released parts of MentaLLaMA (mostly SFT), so I'll point you to their repo for now. But we are working towards further enhancing MentaLLaMA with other techniques such as RLHF, and we will release that code. Stay tuned!
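For anyone who wants to experiment before the official scripts land: FastChat's SFT trainer consumes a JSON file of multi-turn conversations. Below is a minimal sketch of converting a (query, response) instruction pair into that format. The `conversations` field layout follows FastChat's convention; the file name, variable names, and example text are placeholders, not taken from the MentaLLaMA repo.

```python
import json

# Hypothetical example: one instruction pair for a task like DR.
# The query and response strings are placeholders, not real dataset entries.
pairs = [
    {
        "query": "Consider this post: '...'. Question: Does the poster suffer from depression?",
        "response": "Yes, the poster shows signs of depression because ...",
    }
]

# FastChat's SFT scripts expect a list of records with a "conversations"
# field alternating between "human" and "gpt" turns.
records = [
    {
        "id": f"dr_{i}",
        "conversations": [
            {"from": "human", "value": p["query"]},
            {"from": "gpt", "value": p["response"]},
        ],
    }
    for i, p in enumerate(pairs)
]

with open("dr_sft_data.json", "w") as f:
    json.dump(records, f, indent=2)
```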

SteveKGYang avatar Oct 31 '23 11:10 SteveKGYang

Thanks for the reply. Very helpful, and looking forward to what is to come.

NtaylorOX avatar Oct 31 '23 13:10 NtaylorOX

Actually, one quick question. To perform the SFT for MentalLLaMA with the instruction training data for, say, the DR task: do you treat this as a standard auto-regressive objective and concatenate the "query" and the "gpt-3.5-turbo" response? I'm hoping to play around with training some smaller models/architectures myself.

NtaylorOX avatar Nov 01 '23 16:11 NtaylorOX

Yes. This is the standard instruction tuning paradigm. I suggest you build on foundation models that have already been through SFT/RLHF (e.g. LLaMA2-chat, Vicuna), as they will facilitate your training process, especially with small training datasets.
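To make the standard paradigm concrete: the query and response are concatenated into one token sequence, and the autoregressive loss is computed only over the response tokens by masking the query positions in the labels. Here is a minimal sketch with Hugging Face `transformers`; the model name, query, and response are placeholders (MentaLLaMA itself runs this through FastChat's trainer and chat template rather than a raw loop like this).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base model; any SFT'd foundation model (e.g. LLaMA2-chat,
# Vicuna) could be substituted, as suggested above.
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

query = "Consider this post: '...'. Does the poster suffer from depression?"
response = " Yes, because ..."

# Concatenate query + response into a single input sequence.
query_ids = tokenizer(query, return_tensors="pt").input_ids
full_ids = tokenizer(query + response, return_tensors="pt").input_ids

# Standard causal-LM labels, with the query span set to -100 so the
# cross-entropy loss ignores it and only trains on the response tokens.
# (Approximation: tokenization at the query/response boundary can shift
# by a token; FastChat handles this more carefully.)
labels = full_ids.clone()
labels[:, : query_ids.shape[1]] = -100

outputs = model(input_ids=full_ids, labels=labels)
outputs.loss.backward()  # one autoregressive SFT step (optimizer omitted)
```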

SteveKGYang avatar Nov 01 '23 16:11 SteveKGYang

Thought so, I was just double checking. Thanks for the prompt reply! I'll keep you posted if I develop anything that could be brought into this repo.

NtaylorOX avatar Nov 01 '23 17:11 NtaylorOX

Thanks! Any contributions will be appreciated!

SteveKGYang avatar Nov 02 '23 08:11 SteveKGYang