
About performance of Llama (Llama 2, Llama 3) models

Open huazhenliu opened this issue 1 year ago • 1 comment

Thank you for your wonderful work!

Have you ever experimented with Llama2-7B as the base model for C-RLFT? How was the performance? Since OpenChat-3.5-0106 is based on Mistral, its performance is really high; I have tried Llama2-7B, but the performance was not satisfactory.

Two more questions: can a chat model be used as the base model for C-RLFT? I think some code changes would be needed, e.g., the chat template. And what about Llama3-8B-Instruct — is there an easy way to train it, and do you have any performance data?

Thanks in advance.

huazhenliu avatar Apr 25 '24 07:04 huazhenliu

Hi @huazhenliu. We've tried Llama 2 13B; its performance was worse than Mistral 7B's, so we chose Mistral 7B as the base model.

  • For your second question, yes — C-RLFT can be applied to any model. You can edit the chat template here: https://github.com/imoneoi/openchat/blob/master/ochat/config/__init__.py
  • We're actively working on a new version based on Llama-3-8B
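For reference, adapting a chat model mostly comes down to defining its prompt format. Below is a minimal, hypothetical sketch of rendering a conversation in the Llama-2 chat style — the function name and structure here are illustrative only, not the actual openchat API (the real template definitions live in `ochat/config`):

```python
def render_llama2_chat(messages, system=None):
    """Render a list of {"role", "content"} dicts as a Llama-2 chat prompt.

    Hypothetical helper for illustration; openchat registers templates
    differently in ochat/config/__init__.py.
    """
    B_INST, E_INST = "[INST]", "[/INST]"
    B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
    out = []
    # The system prompt is folded into the first user turn, per Llama-2 convention.
    pending_system = f"{B_SYS}{system}{E_SYS}" if system else ""
    for msg in messages:
        if msg["role"] == "user":
            out.append(f"<s>{B_INST} {pending_system}{msg['content']} {E_INST}")
            pending_system = ""
        elif msg["role"] == "assistant":
            out.append(f" {msg['content']} </s>")
    return "".join(out)

prompt = render_llama2_chat(
    [{"role": "user", "content": "Hello"},
     {"role": "assistant", "content": "Hi there"}],
    system="You are helpful.",
)
```

Swapping in a different base model (e.g. a Llama-3 instruct variant) would mean replacing these markers with that model's special tokens.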

imoneoi avatar Apr 26 '24 16:04 imoneoi