Ollama embedding is unable to work.
Config below:
```env
EMBEDDING_MODEL_NAME=bge-m3
EMBEDDING_MODEL_ENDPOINT=http://xxxxx:11434/api/embeddings
EMBEDDING_MODEL_APIKEY=
```
I also tried ollama/bge-m3 and bge-m3:latest as the model name, and http://xxxxx:11434 and http://xxxxx:11434/api as the endpoint. None of them work.
An error occurred during the execution of the job:
The job 10687910-c68d-49fb-955a-febd394e6b96 could not be executed due to an unexpected error during leader task decomposition. Error info: 'detail'
Check the error details in path: '/root/.chat2graph/logs/server.log'
Please check the job 10687910-c68d-49fb-955a-febd394e6b96 ("图是什么?...", i.e. "What is a graph?...") for more details. Or you can re-try to send your message.
Opening server.log:
```
127.0.0.1 - - [17/Aug/2025 23:55:25] "GET /home HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:25] "GET /umi.css HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:25] "GET /umi.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /layouts__index.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /919.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /475.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /p__Home__index.chunk.css HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /556.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /p__Home__index.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /403.async.js HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /static/logo.9cd34d69.png HTTP/1.1" 304 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /api/graphdbs HTTP/1.1" 308 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /api/sessions?page=1&size=10 HTTP/1.1" 308 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /api/graphdbs/ HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:26] "GET /api/sessions/?page=1&size=10 HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:27] "POST /api/sessions HTTP/1.1" 308 -
127.0.0.1 - - [17/Aug/2025 23:55:29] "POST /api/sessions/ HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:29] "GET /api/sessions?page=1&size=10 HTTP/1.1" 308 -
127.0.0.1 - - [17/Aug/2025 23:55:29] "GET /api/sessions/?page=1&size=10 HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:29] "POST /api/sessions/a43ac97c-606a-41fe-9b8c-811c99b0d418/chat HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:32] "GET /api/jobs/10687910-c68d-49fb-955a-febd394e6b96/message HTTP/1.1" 200 -
127.0.0.1 - - [17/Aug/2025 23:55:34] "GET /api/jobs/10687910-c68d-49fb-955a-febd394e6b96/message HTTP/1.1" 200 -
```
Hi @watshare,
Thanks for reaching out and providing the details.
The key requirement for the embedding configuration is that the endpoint must be OpenAI-compatible. As outlined in our documentation (doc/en-us/deployment/config-env.md), the system expects the service behind EMBEDDING_MODEL_ENDPOINT to behave exactly like OpenAI's embeddings API.
While Ollama provides an embedding endpoint, its native API (/api/embeddings) is not OpenAI-compatible by default. It uses a different request and response format. This is the likely cause of the error you're seeing.
The issue is not with the URL itself, but with the API contract. The service at EMBEDDING_MODEL_ENDPOINT must strictly adhere to the OpenAI API specification for embeddings.
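For concreteness, here is a sketch of the two contracts side by side. The hosts and ports are the placeholders from this thread, and the field names reflect the two APIs' published formats; verify against the respective docs for your versions.

```bash
# Ollama's native API: the request carries a "prompt" field and the
# response is a bare object: {"embedding": [...]}.
curl -s http://xxxxx:11434/api/embeddings \
  -d '{"model": "bge-m3", "prompt": "what is a graph?"}'

# OpenAI-style API (what EMBEDDING_MODEL_ENDPOINT must speak): the
# request carries an "input" field and the response wraps the vectors:
# {"data": [{"embedding": [...], "index": 0, ...}], ...}.
curl -s http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "bge-m3", "input": "what is a graph?"}'
```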
To resolve this, you need to use a proxy or a compatibility layer that exposes your Ollama model through an OpenAI-compatible interface.
Correct Configuration Example (Using a Compatibility Layer)
For instance, if you use a tool like LiteLLM or another proxy to create an OpenAI-compatible wrapper for your local Ollama service, your configuration should point to the proxy's endpoint, not the direct Ollama URL.
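As a minimal sketch, assuming LiteLLM is installed and your Ollama server is reachable at the address from your config (flags as documented for LiteLLM's proxy CLI; the port is an arbitrary choice here, so double-check both against its current docs):

```bash
# Expose the Ollama model behind an OpenAI-compatible endpoint
# served at http://localhost:8000/v1.
litellm --model ollama/bge-m3 --api_base http://xxxxx:11434 --port 8000
```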
Your .env file should look something like this:
```env
# The model name you are serving via the proxy
EMBEDDING_MODEL_NAME=bge-m3

# The URL of the OpenAI-compatible proxy, which then calls Ollama.
# This is an example URL; yours may differ based on your proxy setup.
EMBEDDING_MODEL_ENDPOINT=http://localhost:8000/v1/embeddings

# API key might be optional depending on your proxy setup
EMBEDDING_MODEL_APIKEY=
```
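Before restarting the app, you can sanity-check the proxy by hand. A correctly configured endpoint should return an OpenAI-shaped response (a top-level "data" array of embedding objects), not Ollama's bare {"embedding": [...]}. The URL below is the example proxy endpoint from the .env above:

```bash
# Quick sanity check against the OpenAI-compatible proxy endpoint.
curl -s http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "bge-m3", "input": "hello"}'
```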
Hope this clarifies things.
@Appointat Thank you very much!