MentalLLaMA
Training data
Great work and repo.
Whilst I'm aware the actual training likely follows general LLM training scripts/flows, it would be nice to see the training scripts. Is there any plan to upload them?
Thank you very much for your interest. For the currently released parts of MentaLLaMA (mostly SFT), we mainly modified the training architecture of FastChat, so I'll point you to their repo for now. We are also working towards further enhancing MentaLLaMA with other techniques such as RLHF, and we will release that code. Stay tuned!
Thanks for the reply. Very helpful, and looking forward to what is to come.
Actually, one quick question. When performing SFT for MentaLLaMA with the instruction training data for, say, the DR task, do you treat it as a standard auto-regressive objective and concatenate the "query" with the "gpt-3.5-turbo" response? I'm hoping to experiment with training some smaller models/architectures myself.
Yes. This is the standard instruction tuning paradigm. I suggest building on foundation models that have already undergone SFT/RLHF (e.g. LLaMA2-chat, Vicuna), as they will facilitate your training process, especially with small training datasets.
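For anyone else reading along, here is a minimal sketch of what that concatenation looks like for one training example. It assumes a HuggingFace-style tokenizer (the LLaMA2-chat checkpoint name and the choice to mask the loss on prompt tokens with -100 are my own assumptions for illustration, not necessarily what the MentaLLaMA/FastChat pipeline does; some setups compute the loss over the full sequence instead):

```python
# Sketch: build one SFT example by concatenating the instruction/query with
# the gpt-3.5-turbo response, as in standard instruction tuning.
# Assumption: HuggingFace tokenizer; -100 is the label index ignored by
# PyTorch's cross-entropy loss, so only response tokens contribute to the
# auto-regressive objective.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

def build_example(query: str, response: str, max_len: int = 2048):
    prompt_ids = tokenizer(query, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]

    input_ids = (
        [tokenizer.bos_token_id]
        + prompt_ids
        + response_ids
        + [tokenizer.eos_token_id]
    )
    # Mask the prompt so the loss is computed only on the response tokens.
    labels = (
        [-100] * (1 + len(prompt_ids))
        + response_ids
        + [tokenizer.eos_token_id]
    )
    return {"input_ids": input_ids[:max_len], "labels": labels[:max_len]}

# Hypothetical DR-style example (placeholder text, not real dataset content).
example = build_example(
    query="Consider this post: ... Question: Does the poster suffer from depression?",
    response="Reasoning: ... Answer: Yes.",
)
```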
Thought so, was just double checking. Thanks for the prompt reply! I'll keep you posted if I develop anything that could be brought into this repo.
Thanks! Any contributions will be appreciated!