llama.cpp
Add support for CodeQwen due to its tokenizer
We just released CodeQwen1.5; see the blog and model for more info.
Since CodeQwen1.5 uses a different tokenizer, based on SentencePiece, we need to make some changes to convert-hf-to-gguf.py to adapt it.
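
For reference, a minimal sketch of the kind of change involved, assuming the existing `Model.register` mechanism and the `_set_vocab_sentencepiece` / `_set_vocab_gpt2` helpers already present in convert-hf-to-gguf.py (the class and fallback logic below are illustrative, not the exact final diff):

```python
# Inside convert-hf-to-gguf.py, where the Model base class and gguf are
# already imported/defined.

@Model.register("Qwen2ForCausalLM")
class Qwen2Model(Model):
    model_arch = gguf.MODEL_ARCH.QWEN2

    def set_vocab(self):
        # CodeQwen1.5 ships a SentencePiece-based tokenizer, so try that
        # first; fall back to the BPE (GPT-2 style) vocab used by other
        # Qwen2 checkpoints when no SentencePiece model file is found.
        try:
            self._set_vocab_sentencepiece()
        except FileNotFoundError:
            self._set_vocab_gpt2()
```

The idea is simply to prefer the SentencePiece path when the tokenizer model file exists, so existing Qwen2 conversions keep working unchanged.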
@ggerganov feel free to take a look at this small code change 😃