Template errors in AIO CUDA 12 8GB
I am using `localai/localai:latest-aio-gpu-nvidia-cuda-12` with 8 GB of VRAM.
Using `{{.Input}}` in the `chat_message` template causes template execution errors, because `ChatMessageTemplateData` does not have an `.Input` field.
Log:

```
template: :1:2: executing "" at <.Input>: can't evaluate field Input in type templates.ChatMessageTemplateData
```
Affected files:
- `/aio/gpu-8g/vision.yaml` (vision model configuration)
- `/aio/gpu-8g/text-to-text.yaml` (text model configuration)
`{{.Input}}` also appears in other YAML files, but I have not investigated those.
Correct configuration:

```yaml
chat_message: |
  <|im_start|>{{ .RoleName }}
  {{.Content}}<|im_end|>
```
Suggested fix: review and correct all example configurations that use `{{.Input}}` in `chat_message`.
I also noticed that the web documentation under "Run with container images" does not match the models actually shipped in the AIO images (e.g. vision now uses minicpm, not llava-1.6-mistral).
After some further testing I found other problems with the templates, such as missing stop sequences. I guess I must be the only one still using LocalAI AIO with an 8 GB GPU, since there seem to be zero issues reported for this configuration.
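To illustrate the stop-sequence point: for a ChatML-style prompt format, the end token also needs to be declared as a stop word so generation terminates cleanly. This is only a sketch of the relevant fragment; the field names follow LocalAI's model YAML schema, and the exact tokens depend on the model in use.

```yaml
# Sketch of the relevant fragment of a LocalAI model YAML
# (assumption: ChatML-style prompt format, as in the corrected chat_message above).
stopwords:
  - "<|im_end|>"
template:
  chat_message: |
    <|im_start|>{{ .RoleName }}
    {{.Content}}<|im_end|>
```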