llama.cpp
Ziya support
Ziya is a large-scale pre-trained model based on LLaMA with 13 billion parameters. It adds Chinese tokens, so the model's vocab size does not match the tokenizer's.
Could anyone give some hint here?
Following the README and expanding the tokenizer yourself works well for me.
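A quick sanity check before converting is to compare the tokenizer's vocab size against the model's embedding count and pad the difference. A minimal sketch (the helper name and the numbers are illustrative, not Ziya's actual sizes):

```python
# Sketch: detect a model/tokenizer vocab-size mismatch before conversion.
# The sizes used below are illustrative placeholders, not Ziya's real values.

def padding_needed(model_vocab_size: int, tokenizer_vocab_size: int) -> int:
    """Return how many padding tokens the tokenizer needs to match the
    model's embedding table; raise if the model has fewer embeddings."""
    if tokenizer_vocab_size > model_vocab_size:
        raise ValueError(
            f"tokenizer has {tokenizer_vocab_size} tokens but the model "
            f"only has {model_vocab_size} embeddings"
        )
    return model_vocab_size - tokenizer_vocab_size

# e.g. a LLaMA base vocab of 32000 extended with extra Chinese tokens,
# where the embedding table was padded past the tokenizer's size
print(padding_needed(model_vocab_size=39424, tokenizer_vocab_size=39410))
```

If the function raises, the tokenizer still needs expanding; if it returns a positive number, add that many dummy tokens so the conversion script sees matching sizes.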
See my repo on huggingface: https://huggingface.co/thatname/Ziya-LLaMA-13B-v1-ggml
Problem solved, thanks!