
Template errors in AIO CUDA 12 8GB

Open logmod opened this issue 1 month ago • 1 comment

I am using localai/localai:latest-aio-gpu-nvidia-cuda-12 with 8GB VRAM

Using {{.Input}} in the chat_message template causes template execution errors, because ChatMessageTemplateData has no .Input field.

Log: template: :1:2: executing "" at <.Input>: can't evaluate field Input in type templates.ChatMessageTemplateData

Affected files:

- /aio/gpu-8g/vision.yaml - vision model configuration
- /aio/gpu-8g/text-to-text.yaml - text-to-text model configuration

{{.Input}} also appears in other YAML files, but I did not follow up on those.

Correct Configuration:

  chat_message: |
    <|im_start|>{{ .RoleName }}
    {{.Content}}<|im_end|>

Suggested Fix: Review and fix the example configurations that use {{.Input}} in chat_message.

I also noticed that the "Run with container images" page in the web documentation no longer matches the actual model selection in AIO (e.g. vision now uses minicpm, not llava-1.6-mistral).

logmod avatar Nov 08 '25 15:11 logmod

After some testing I noticed that there are other problems with the templates such as missing stop sequences. I guess I must be the only one still using LocalAI AIO with an 8GB GPU as there seem to be zero issues reported with this config.
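On the missing stop sequences: LocalAI model YAML supports a stopwords list next to the template section. A sketch of what might be added for a ChatML-style template like the one above (the exact tokens are an assumption and depend on the model being configured):

```yaml
# Sketch only: stop sequences for a ChatML-style template.
# Tokens shown are typical ChatML markers, not confirmed for these models.
stopwords:
  - "<|im_end|>"
  - "<|endoftext|>"
```

Without these, generation can run past the end-of-turn marker and emit the template tokens in the response.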

logmod avatar Nov 10 '25 22:11 logmod