LocalAI
[bug] Docker image can't find the llama backend
LocalAI version:
v2.21.0-cublas-cuda12-ffmpeg-core (❌), v2.21.1-cublas-cuda12-core (❌), v2.19.4-cublas-cuda12-ffmpeg-core (✅)
Environment, CPU architecture, OS, and Version:
Ubuntu 24.04 LTS (GNU/Linux 6.8.0-45-generic x86_64)
Describe the bug
With either of the two latest image versions, LocalAI reports that the llama backend cannot be found; earlier versions worked fine.
To Reproduce
Pull the new image and start the container (e.g. with docker compose up). The error appears as soon as a request is made; the model's YAML configuration specifies the backend as llama.
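For reference, a minimal model config that triggers this looks like the sketch below (the model name and file name are illustrative, not from the original report; only the backend: llama line is from the report):

name: my-model
backend: llama
parameters:
  model: my-model.gguf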
Expected behavior
Inference works normally, as in v2.19.4.
Logs
None.
Additional context
If any other information is needed, please let me know.
Same here, but I found a temporary workaround:
docker exec -it <your container id> /bin/bash
cp /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-grpc /tmp/localai/backend_data/backend-assets/grpc/llama-cpp
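The same copy can also be done without opening an interactive shell (the container id is a placeholder):

docker exec <your container id> cp /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-grpc /tmp/localai/backend_data/backend-assets/grpc/llama-cpp

Note that the copied file lives in the container's writable layer, so this has to be repeated whenever the container is recreated.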
Alternatively, you can change your model's YAML file to use the renamed backend directly:
backend: llama-cpp-grpc
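In context, the relevant part of the model config would then look like this sketch (names are illustrative):

name: my-model
backend: llama-cpp-grpc
parameters:
  model: my-model.gguf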
Good catch, this should be fixed by https://github.com/mudler/LocalAI/pull/3789