llama.cpp
Update convert-gptq-to-ggml.py for the new tokenizer output
Apply the changes from #252 to convert-gptq-to-ggml.py
For more info about what this script does, see #301
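As a rough illustration of the kind of change involved, here is a minimal sketch of serializing a vocabulary where each token carries a score alongside its bytes. The exact field layout is an assumption for illustration, not the actual ggml format; consult #252 and the converter scripts for the real details.

```python
import struct
from io import BytesIO

def write_vocab(fout, tokens):
    # Hypothetical layout for illustration: each token is written as
    # (int32 byte length, raw UTF-8 bytes, float32 score).
    for text, score in tokens:
        data = text.encode("utf-8")
        fout.write(struct.pack("i", len(data)))
        fout.write(data)
        fout.write(struct.pack("f", score))

buf = BytesIO()
write_vocab(buf, [("hello", -1.5), ("world", -2.0)])
print(len(buf.getvalue()))
```

Each entry above occupies 4 bytes for the length, the token bytes themselves, and 4 bytes for the score.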