
[bug] docker image show can't find llama backend

Open Hisir0909 opened this issue 1 year ago • 2 comments

LocalAI version:

v2.21.0-cublas-cuda12-ffmpeg-core (❌), v2.21.1-cublas-cuda12-core (❌), v2.19.4-cublas-cuda12-ffmpeg-core (✅)

Environment, CPU architecture, OS, and Version:

Ubuntu 24.04 LTS (GNU/Linux 6.8.0-45-generic x86_64)

Describe the bug

When I use either of the two latest image versions, requests fail with an error saying the llama backend cannot be found; with the earlier version everything worked normally.

To Reproduce

Pull the new image, bring the container up, and send a request; the error appears. The model's YAML configuration specifies the backend as llama.
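
For reference, a minimal model config that triggers the error might look like this (the model name and weights file below are hypothetical; only the backend line matters here):

name: my-model                # hypothetical model name
backend: llama                # the backend name the v2.21.x images fail to resolve
parameters:
  model: my-model.gguf        # hypothetical GGUF weights file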

Expected behavior

Inference works normally, as in v2.19.

Logs

None.

Additional context

If there is any other information needed, please let me know.

Hisir0909 avatar Sep 26 '24 12:09 Hisir0909

Same here, but I found a temporary workaround:

docker exec -it <your container id> /bin/bash

cp /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-grpc /tmp/localai/backend_data/backend-assets/grpc/llama-cpp
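
The same copy can also be done in one shot without an interactive shell (the container id is a placeholder); keep in mind the copied binary lives inside the container, so the workaround disappears whenever the container is recreated:

docker exec <your container id> cp /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-grpc /tmp/localai/backend_data/backend-assets/grpc/llama-cpp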

github0null avatar Sep 29 '24 15:09 github0null

You can also change your YAML files to just do the following:

backend: llama-cpp-grpc
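
For context, a minimal sketch of where that line goes in a model config (model name and weights file are hypothetical):

name: my-model                # hypothetical model name
backend: llama-cpp-grpc       # instead of backend: llama
parameters:
  model: my-model.gguf        # hypothetical GGUF weights file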

TheDarkTrumpet avatar Sep 29 '24 18:09 TheDarkTrumpet

good catch, this should be fixed by https://github.com/mudler/LocalAI/pull/3789

mudler avatar Oct 11 '24 13:10 mudler