Hao Zhang
Closing, as this is not related to any feature developed by this repo.
@sahalshajim Yes you're right. Thanks!
@78 There is nothing special about it. We just happened to use upper case during our training. In your fine-tuning job you can try a different prefix...
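For concreteness, here is a minimal sketch of what "the prefix" means here: the role tag is just a string in the prompt template, so a fine-tuning run can pick a different casing as long as training and inference agree. The `build_prompt` helper and its arguments below are hypothetical, not FastChat code.

```python
def build_prompt(system: str,
                 turns: list[tuple[str, str]],
                 user_prefix: str = "USER",
                 bot_prefix: str = "ASSISTANT") -> str:
    """Concatenate a system message and (user, assistant) turns into one prompt."""
    parts = [system]
    for user_msg, bot_msg in turns:
        parts.append(f"{user_prefix}: {user_msg}")
        parts.append(f"{bot_prefix}: {bot_msg}")
    return "\n".join(parts)

# Upper-case prefixes, as used in our training:
print(build_prompt("A chat between a user and an assistant.",
                   [("Hello!", "Hi, how can I help?")]))

# A fine-tuning job could instead pass user_prefix="User", bot_prefix="Assistant",
# provided the same prefixes are used again at inference time.
```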
Please try our latest Vicuna-13B-v1.3 or LongChat. The issue is stale, so closing.
Seems like a bug with CPU support? CC @merrymercy
Closing. Unfortunately, we do not provide a specialization service at this moment.
@unmorrill0 8 GB is very challenging, as the model itself in 8-bit needs about 7 GB to store. Could you increase your swap space? But be mindful that once swapping is triggered the...
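To make the memory arithmetic concrete, a minimal sketch assuming the 7B model and a transformers/bitsandbytes stack that supports `load_in_8bit`; the model id below is an assumption, substitute the checkpoint you are actually running.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Back-of-the-envelope: int8 stores one byte per parameter, so a ~7B-parameter
# model needs ~7 GB for the weights alone, before activations and the KV cache.
params = 7e9
print(f"int8 weights alone: {params / 1e9:.1f} GB")  # ~7.0 GB

# 8-bit loading via bitsandbytes (assumes a transformers version that accepts
# load_in_8bit, and that "lmsys/vicuna-7b-v1.3" is the model you want).
model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.3",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.3", use_fast=False)
```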
@theslugger Yes, you can. You can start from either LLaMA-13B or Vicuna-13B. There is indeed a difference depending on which one you start fine-tuning from. I believe starting from Vicuna...
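To make the "where to start" point concrete, a minimal sketch assuming Hugging Face transformers; the model ids and paths below are illustrative, not a prescribed setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Option A: start from the raw LLaMA-13B base weights (path is a placeholder).
# model = AutoModelForCausalLM.from_pretrained("path/to/llama-13b")

# Option B: start from Vicuna-13B, which has already been instruction-tuned.
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.3")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3", use_fast=False)

# The fine-tuning recipe itself (data, hyperparameters, training loop) can stay
# the same in both cases; only the initial checkpoint differs.
```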
@suquark @infwinston
Closing as it is resolved. The issue was that the LLaMA tokenizers were updated by HF. We made a few refactors to handle this correctly. As long as you...
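If it helps, a quick hedged sanity check (plain transformers, nothing FastChat-specific) to confirm your local install resolves the updated LLaMA tokenizer; the model id is an assumption, use whichever checkpoint you actually load.

```python
from transformers import AutoTokenizer

# With an up-to-date transformers, this should resolve to the updated LLaMA
# tokenizer class and report the expected special tokens.
tok = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3", use_fast=False)
print(type(tok).__name__)                           # e.g. LlamaTokenizer
print(tok.bos_token, tok.eos_token, tok.unk_token)  # e.g. <s> </s> <unk>
```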