
Added --chat-template-file to llama-run

Open · engelmi opened this pull request 5 days ago · 1 comment

Relates to: https://github.com/ggml-org/llama.cpp/issues/11178

Added a --chat-template-file CLI option to llama-run. When specified, the file is read and its contents are passed to common_chat_templates_from_model, overriding the model's built-in chat template.
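Loading the template boils down to reading the whole file into a string before handing it over. A minimal sketch of that step, where read_chat_template_file is an illustrative helper name and not the actual llama-run symbol:

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: slurp the template file into a string so it can be
// passed on in place of the model's built-in chat template.
static std::string read_chat_template_file(const std::string & path) {
    std::ifstream file(path);
    if (!file) {
        throw std::runtime_error("failed to open chat template file: " + path);
    }
    std::ostringstream ss;
    ss << file.rdbuf();  // read the entire file, including newlines
    return ss.str();
}
```

Failing loudly on a missing or unreadable path keeps the behavior predictable: the user gets an error up front instead of silently falling back to the model's template.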

This also enables running the granite-code model from ollama:

# using a jinja chat template file 
# (when prefix, e.g. hf://, is not specified, llama-run pulls from ollama)
$ llama-run --chat-template-file ./chat.tmpl granite-code
> write code

Here is a code snippet in Python:

"""
def f(x):
    return x**2
"""

# without a jinja chat template file
$ llama-run granite-code
> write code
failed to apply the chat template


engelmi · Feb 17 '25 08:02