
failed to load model with internal loader

Open FrankLIKE opened this issue 9 months ago • 1 comment

LocalAI version: latest-aio-gpu-nvidia-cuda-12

Environment, GPU architecture, OS, and Version: Docker, GPU, Windows 11

Describe the bug: unable to load the cross-encoder model.

To Reproduce: install the latest-aio-gpu-nvidia-cuda-12 image and call /v1/rerank with the cross-encoder model (a request sketch follows below).

Expected behavior: the model loads and responds to requests.
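
A request that should trigger the failure might look like the sketch below. The host, port, query, and Jina-style payload shape are assumptions; only the model name and the /v1/rerank endpoint come from the report and the logs.

```python
import requests

# Hypothetical reproduction of the failing request (POST /v1/rerank appears
# in the logs below). http://localhost:8080 is LocalAI's default listen
# address, but host, port, query, and documents are all illustrative;
# only the model name "cross-encoder" is taken from the report.
resp = requests.post(
    "http://localhost:8080/v1/rerank",
    json={
        "model": "cross-encoder",
        "query": "what is LocalAI?",
        "documents": [
            "LocalAI is a drop-in OpenAI-compatible API for local models.",
            "Unrelated text.",
        ],
    },
    timeout=120,
)
print(resp.status_code)
print(resp.text[:500])
```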

Logs

2:45AM INF Success ip=172.17.0.1 latency="187.603µs" method=GET status=200 url=/static/assets/UcCO3FwrK3iLTeHuS_fvQtMwCp50KnMw2boKoduKmMEVuGKYMZg.ttf
2:45AM INF Success ip=172.17.0.1 latency="29.522µs" method=GET status=200 url=/static/assets/tw-elements.js
2:45AM INF Success ip=172.17.0.1 latency=95.525846ms method=POST status=200 url=/browse/search/models
2:45AM INF Success ip=127.0.0.1 latency="58.697µs" method=GET status=200 url=/readyz
2:46AM INF BackendLoader starting backend=rerankers modelID=cross-encoder o.model=cross-encoder
2:46AM INF Success ip=127.0.0.1 latency="30.471µs" method=GET status=200 url=/readyz
2:46AM ERR Server error error="failed to load model with internal loader: could not load model (no success): Unexpected err=OSError("We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mixedbread-ai/mxbai-rerank-base-v1 is not the path to a directory containing a file named config.json.\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'."), type(err)=<class 'OSError'>" ip=172.17.0.1 latency=22.118975703s method=POST status=500 url=/v1/rerank
2:47AM INF Success ip=127.0.0.1 latency="32.108µs" method=GET status=200 url=/readyz
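
The OSError indicates the rerankers backend tries to fetch mixedbread-ai/mxbai-rerank-base-v1 from Hugging Face at load time. A possible workaround, untested here, is to pre-populate the Hugging Face cache inside the container so the backend can resolve the files without network access; a minimal sketch using huggingface_hub, assuming the backend reads the standard HF cache:

```python
from huggingface_hub import snapshot_download

# Pre-fetch the reranker weights into the standard Hugging Face cache so a
# later load can resolve them without contacting huggingface.co. Run this
# once with working network access (e.g. inside the LocalAI container).
# The repo id comes from the error message above.
snapshot_download(repo_id="mixedbread-ai/mxbai-rerank-base-v1")

# With the cache populated, setting HF_HUB_OFFLINE=1 in the environment
# forces transformers to use only cached files instead of the network.
```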

FrankLIKE avatar Mar 21 '25 02:03 FrankLIKE

I'm having this same issue... All my models work fine on KoboldCPP, Ollama, etc., but not LocalAI.

F1zzyD avatar Apr 03 '25 16:04 F1zzyD

Same issue with quite a few models pulled from ollama:// -

12:50AM INF Trying to load the model 'A Model' with the backend '[llama-cpp llama-cpp-fallback bark-cpp piper silero-vad stablediffusion-ggml whisper huggingface /build/backend/python/rerankers/run.sh /build/backend/python/faster-whisper/run.sh /build/backend/python/coqui/run.sh /build/backend/python/transformers/run.sh /build/backend/python/diffusers/run.sh /build/backend/python/kokoro/run.sh /build/backend/python/vllm/run.sh /build/backend/python/bark/run.sh /build/backend/python/exllama2/run.sh]'
12:50AM INF [llama-cpp] Attempting to load
12:50AM INF BackendLoader starting backend=llama-cpp modelID="A Model" o.model=command-a
12:50AM INF [llama-cpp] attempting to load with AVX512 variant
12:50AM ERR [llama-cpp] Failed loading model, trying with fallback 'llama-cpp-fallback', error: failed to load model with internal loader: could not load model: rpc error: code = Canceled desc = 
12:50AM INF [llama-cpp] Fails: failed to load model with internal loader: could not load model: rpc error: code = Canceled desc = 
12:50AM INF [llama-cpp-fallback] Attempting to load
12:50AM INF BackendLoader starting backend=llama-cpp-fallback modelID="A Model" o.model=command-a
12:50AM INF [llama-cpp-fallback] Fails: failed to load model with internal loader: could not load model: rpc error: code = Canceled desc = 
12:50AM INF [bark-cpp] Attempting to load
12:50AM INF BackendLoader starting backend=bark-cpp modelID="A Model" o.model=command-a
12:50AM INF [bark-cpp] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = inference failed
12:50AM INF [piper] Attempting to load
12:50AM INF BackendLoader starting backend=piper modelID="A Model" o.model=command-a
12:50AM INF [piper] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = unsupported model type /models/command-a (should end with .onnx)
12:50AM INF [silero-vad] Attempting to load
12:50AM INF BackendLoader starting backend=silero-vad modelID="A Model" o.model=command-a
12:50AM INF [silero-vad] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = create silero detector: failed to create session: Load model from /models/command-a failed:Protobuf parsing failed.
12:50AM INF [stablediffusion-ggml] Attempting to load
12:50AM INF BackendLoader starting backend=stablediffusion-ggml modelID="A Model" o.model=command-a
12:50AM INF [stablediffusion-ggml] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = could not load model
12:50AM INF [whisper] Attempting to load
12:50AM INF BackendLoader starting backend=whisper modelID="A Model" o.model=command-a
12:50AM INF [whisper] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = unable to load model
12:50AM INF [huggingface] Attempting to load
12:50AM INF BackendLoader starting backend=huggingface modelID="A Model" o.model=command-a
12:50AM INF [huggingface] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unknown desc = no huggingface token provided
12:50AM INF [/build/backend/python/rerankers/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/rerankers/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/rerankers/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh
12:50AM INF [/build/backend/python/faster-whisper/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/faster-whisper/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/faster-whisper/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/faster-whisper/run.sh
12:50AM INF [/build/backend/python/coqui/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/coqui/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/coqui/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh
12:50AM INF [/build/backend/python/transformers/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/transformers/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/transformers/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh
12:50AM INF [/build/backend/python/diffusers/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/diffusers/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/diffusers/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh
12:50AM INF [/build/backend/python/kokoro/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/kokoro/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/kokoro/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/kokoro/run.sh
12:50AM INF [/build/backend/python/vllm/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/vllm/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/vllm/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh
12:50AM INF [/build/backend/python/bark/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/bark/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/bark/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh
12:50AM INF [/build/backend/python/exllama2/run.sh] Attempting to load
12:50AM INF BackendLoader starting backend=/build/backend/python/exllama2/run.sh modelID="A Model" o.model=command-a
12:50AM INF [/build/backend/python/exllama2/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh
12:50AM INF Success ip=127.0.0.1 latency="34.507µs" method=GET status=200 url=/readyz

Still happening on 0.6.9.
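
One observation from the log above: no backend is pinned for the model, so LocalAI walks the entire backend list, and every loader rejects /models/command-a (llama-cpp cancels, silero-vad reports a Protobuf parse failure). A quick sanity check, sketched below, is whether the file on disk is even a complete GGUF; valid GGUF files begin with the 4-byte magic b"GGUF", so a truncated or mislinked ollama:// pull would fail this check. Only the path is taken from the log; the rest is illustrative.

```python
# Minimal sanity check for the model file from the log above. Valid GGUF
# files start with the 4-byte magic b"GGUF"; anything else suggests the
# ollama:// pull left a truncated or non-GGUF file behind, which would
# explain every backend in the fallback chain rejecting it.
MODEL_PATH = "/models/command-a"  # path taken from the log; adjust as needed

with open(MODEL_PATH, "rb") as f:
    magic = f.read(4)

if magic == b"GGUF":
    print("GGUF header present; the failure is likely elsewhere (RAM, backend build)")
else:
    print(f"unexpected magic {magic!r}; file is probably not a complete GGUF")
```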

sempervictus avatar May 18 '25 00:05 sempervictus

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Aug 16 '25 02:08 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Aug 26 '25 02:08 github-actions[bot]

I encountered the same issue.

7:46AM ERR Failed to load model deepseek-ai_deepseek-r1-0528-qwen3-8b with backend cpu-llama-cpp error="failed to load model with internal loader: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF" modelID=deepseek-ai_deepseek-r1-0528-qwen3-8b
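
The "error reading from server: EOF" typically means the gRPC backend process exited while loading the model, often from memory pressure or an incompatible build. For reference, a minimal reproduction against a local instance might look like the sketch below; the host, port, and prompt are assumptions, and only the model name comes from the log.

```python
import requests

# Hypothetical reproduction request; http://localhost:8080 is LocalAI's
# default listen address, but host, port, and the prompt are assumptions.
# Only the model name is taken from the log above.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "deepseek-ai_deepseek-r1-0528-qwen3-8b",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=600,
)
print(resp.status_code)
print(resp.text[:500])
```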

secnotes avatar Nov 26 '25 07:11 secnotes