intfloat/multilingual-e5-base can't load
Hi,
I'm trying to use the intfloat/multilingual-e5-base model, but it fails to load, and the logs show the following error.
Help would be greatly appreciated, as I am in dire need of multi-language embeddings. I am using Docker on Windows and downloaded the images from the hub today.
Thank you!
ERR Server error error="could not load model - all backends returned error:
[llama-cpp]: could not load model: rpc error: code = Canceled desc =
[llama-cpp]: could not load model: rpc error: code = Canceled desc =
[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model
[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model
[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc =
[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /build/models/intfloat/multilingual-e5-base (should end with .onnx)
[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
[whisper]: grpc service not ready
[stablediffusion]: grpc service not ready
[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided
[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model
[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS"
Hi dobry,
this log usually means an incorrectly written model configuration. Can you post your YAML file?
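For reference, a LocalAI model configuration is a small YAML file that lives in the models directory. A minimal sketch for an embeddings model could look like the following (the names here are illustrative, not taken from your setup):

name: multilingual-e5-base                 # the model name you will call through the API
backend: sentencetransformers              # backend that should serve the model
embeddings: true                           # expose the model on /v1/embeddings
parameters:
  model: intfloat/multilingual-e5-base     # Hugging Face id (or local file) to load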
Thank you
Hi fakezeta,
Thank you so much for your reply! I'm sorry, but I'm fairly new to technical issues and AI, and I don't know where to find this file. I pulled the image using the command below and did not clone the repository from GitHub onto my computer. If you could let me know where I can find this file, I would greatly appreciate it.
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
Thank you
I'm sorry but I'm still confused.
Let me try to clarify.
You are using an AIO (All-In-One) build, which already comes preconfigured with some models, as described here.
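If you want to inspect or edit those configuration files from Windows, one option is to mount a host folder over the container's models directory when you start it. This is a sketch, assuming the image keeps its models under /build/models (the paths in your error log suggest it does); the AIO image should recreate its preconfigured models there on first start:

docker run -ti --name local-ai -p 8080:8080 -v %cd%\models:/build/models localai/localai:latest-aio-cpu

(Use ${PWD}/models instead of %cd%\models if you are on PowerShell rather than cmd.)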
So the model all-MiniLM-L6-v2 should already be available under the model name text-embedding-ada-002.
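You can verify that with a quick call to the OpenAI-compatible endpoint (port 8080 comes from your docker run command):

curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" -d '{"model": "text-embedding-ada-002", "input": "hello world"}'

It should return a JSON body containing an embedding vector.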
You want to try intfloat/multilingual-e5-base for its multilingual capabilities. To use it, you must follow the steps described in Install and Run Models (a sketch of the API call is at the end of this message).
FWIW, I'm using intfloat/multilingual-e5-base via an OpenVINO-converted model (spoiler: I converted it myself).
You can find it in the model gallery if you're interested.
Let me know if this points you in the right direction.
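As a sketch of the install steps mentioned above: besides clicking Install in the web UI, you can ask a running server to install a gallery model through its API. The gallery id below is a guess; check the gallery page for the exact one:

curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{"id": "localai@openvino-multilingual-e5-base"}'

Both routes end up writing a model YAML into the models directory.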
Hi fakezeta,
I apologize for not being very precise, but as I indicated above, I am only just starting to gain experience.
I want to use open source multi-language embeddings for Anything LLM.
- I run LocalAI in Docker on Windows using the command docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
- I was able to connect AnythingLLM and LocalAI through the API.
- I downloaded openvino-multilingual-e5-base from the Gallery by clicking Install.
- However, when I select this model in AnythingLLM and try to check whether the embeddings are working, the following error occurs:
12:52PM INF Loading model 'intfloat/multilingual-e5-base' with backend transformers
12:52PM ERR Server error error="could not load model (no success): Unexpected err=ModuleNotFoundError("No module named 'optimum'"), type(err)=<class 'ModuleNotFoundError'>" ip=172.17.0.1 latency=4.021711669s method=POST status=500 url=/v1/embeddings
12:52PM INF Loading model 'intfloat/multilingual-e5-base' with backend transformers
12:52PM INF Success ip=127.0.0.1 latency="42.502µs" method=GET status=200 url=/readyz
12:52PM ERR Server error error="could not load model (no success): Unexpected err=ModuleNotFoundError("No module named 'optimum'"), type(err)=<class 'ModuleNotFoundError'>" ip=172.17.0.1 latency=4.023253807s method=POST status=500 url=/v1/embeddings
12:52PM INF Loading model 'intfloat/multilingual-e5-base' with backend transformers
12:52PM ERR Server error error="could not load model (no success): Unexpected err=ModuleNotFoundError("No module named 'optimum'"), type(err)=<class 'ModuleNotFoundError'>" ip=172.17.0.1 latency=4.018865877s method=POST status=500 url=/v1/embeddings
- I have checked that the model text-embedding-ada-002 is working, and there everything is fine. I thought that specifying the name of the unconverted model <intfloat/multilingual-e5-base> in the API might help, but this resulted in the same error as above.
I know I'm probably doing something wrong, but the pre-configured embeddings model works, and I'm having trouble getting the one I downloaded from the gallery to do the job. And thank you very much for your work converting intfloat/multilingual-e5-base for LocalAI!! If I can get this to work, it will be a lifesaver for me, as there are very few options for running multilingual embeddings locally.
For me, the error showed:
2024-08-07 09:34:08 2:34AM ERR Server error error="could not load model (no success): Unexpected err=ModuleNotFoundError("No module named 'optimum'"), type(err)=<class 'ModuleNotFoundError'>" ip=172.18.0.1 latency=6.079192644s method=POST status=500 url=/v1/embeddings
Can confirm on 2.24.2: error="failed to load model with internal loader: could not load model (no success): Unexpected err=ModuleNotFoundError(\"No module named 'optimum'\"), type(err)=<class 'ModuleNotFoundError'>"
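For anyone else hitting the ModuleNotFoundError above: optimum is the Hugging Face package the transformers backend is failing to import. An untested workaround sketch is to install it by hand inside the container; whether plain pip lands in the virtualenv the backend actually uses depends on the image layout, so treat this as a diagnostic rather than a fix:

docker exec -it local-ai /bin/bash
pip install "optimum[openvino]"

Then restart the container so the backend is reloaded.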
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.