
[Bug]:

Open Hammton opened this issue 6 months ago • 5 comments

Do you need to file an issue?

  • [x] I have searched the existing issues and this bug is not already filed.
  • [ ] I believe this is a legitimate bug, not just a question or feature request.

Describe the bug

Processing 1 document(s)
Extracting stage 1/1: SAMA-IT_Governance_Framework.pdf
Processing d-id: doc-63dd1ccac08c944f8d943ba5686d7298
Traceback (most recent call last):
  File "/app/lightrag/lightrag.py", line 1592, in process_document
    await asyncio.gather(*first_stage_tasks)
  File "/app/lightrag/kg/nano_vector_db_impl.py", line 119, in upsert
    embeddings_list = await asyncio.gather(*embedding_tasks)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lightrag/utils.py", line 835, in wait_func
    return await future
           ^^^^^^^^^^^^
  File "/app/lightrag/utils.py", line 539, in worker
    result = await asyncio.wait_for(
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/app/lightrag/utils.py", line 346, in call
    return await self.func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lightrag/api/lightrag_server.py", line 452, in optimized_embedding_function
    return await ollama_embed(
           ^^^^^^^^^^^^^^^^^^^
  File "/app/lightrag/llm/ollama.py", line 178, in ollama_embed
    raise e
  File "/app/lightrag/llm/ollama.py", line 165, in ollama_embed
    data = await ollama_client.embed(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/lib/python3.12/site-packages/ollama/_client.py", line 979, in embed
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/lib/python3.12/site-packages/ollama/_client.py", line 751, in _request
    return cls((await self._request_raw(*args, **kwargs)).json())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/lib/python3.12/site-packages/ollama/_client.py", line 697, in _request_raw
    raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
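The traceback fails inside `ollama_embed`, not in any Azure code path: the LLM binding may point at Azure, but the embedding binding is still resolving to Ollama. A hedged sketch (not LightRAG's actual code) of why configuring only the LLM side leaves the embedder on its default backend — server-style config loaders commonly default each binding independently:

```python
# Hypothetical illustration: each binding falls back to its own default,
# so setting LLM_BINDING alone does not move the embedder off Ollama.
def resolve_bindings(env: dict) -> tuple[str, str]:
    llm = env.get("LLM_BINDING", "ollama")
    embedding = env.get("EMBEDDING_BINDING", "ollama")  # independent default
    return llm, embedding

# Only the LLM side configured, as in this report:
print(resolve_bindings({"LLM_BINDING": "azure_openai"}))
# -> ('azure_openai', 'ollama'): embedding requests still go to Ollama
```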

I get this bug despite using Azure OpenAI credentials.

Image

Steps to reproduce

No response

Expected Behavior

No response

LightRAG Config Used

Paste your config here

Logs and screenshots

No response

Additional Information

  • LightRAG Version:
  • Operating System:
  • Python Version:
  • Related Issues:

Hammton · Sep 30 '25 13:09

You are missing these

Image

kevinnkansah · Sep 30 '25 15:09

You should create your own .env file and configure the LLM and Embedding properly before running the LightRAG Server.

danielaskdd · Sep 30 '25 17:09

@kevinnkansah Isn't this for the OpenAI API? What should the API key and binding host be — the Azure OpenAI key and endpoint?

LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key

Hammton · Sep 30 '25 19:09

Sample env config for azure

### LLM request timeout setting for all LLMs (0 means no timeout for Ollama)
# LLM_TIMEOUT=180

LLM_BINDING=azure_openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=azure_endpoint
LLM_BINDING_API_KEY=your_api_key

### Optional for Azure
AZURE_OPENAI_API_VERSION=2024-08-01-preview
AZURE_OPENAI_DEPLOYMENT=gpt-4o
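
The traceback in this issue comes from the embedding side, so the embedding variables likely need the same treatment. A sketch following the same pattern (variable names assumed from LightRAG's env.example; the model, dimension, deployment name, and API version below are placeholders to verify against your setup):

EMBEDDING_BINDING=azure_openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_DIM=3072
EMBEDDING_BINDING_HOST=azure_endpoint
EMBEDDING_BINDING_API_KEY=your_api_key

### Optional for Azure
AZURE_EMBEDDING_API_VERSION=2024-08-01-preview
AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-large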

danielaskdd · Sep 30 '25 20:09

@kevinnkansah Isn't this for the OpenAI API? What should the API key and binding host be — the Azure OpenAI key and endpoint?

LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key

You can use that for any LLM provider (e.g., Groq, Gemini, OpenAI, OpenRouter, etc.). As long as it exposes an OpenAI-compatible endpoint, it will work.
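To illustrate the point that one binding covers every OpenAI-compatible provider, here is a tiny helper sketch; only the host (and key) change per provider. The Groq host URL and model name below are illustrative assumptions, not verified values:

```python
# Build the four LLM-side .env entries discussed in this thread.
# The variable names come from the config shown above; everything else
# (hosts, models) is a placeholder to substitute with your provider's values.
def llm_env(model: str, host: str, api_key: str) -> dict:
    return {
        "LLM_BINDING": "openai",  # generic OpenAI-compatible binding
        "LLM_MODEL": model,
        "LLM_BINDING_HOST": host,
        "LLM_BINDING_API_KEY": api_key,
    }

# Pointing the same binding at a hypothetical Groq endpoint:
groq = llm_env("llama-3.1-70b-versatile", "https://api.groq.com/openai/v1", "gsk_your_key")
```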

kevinnkansah · Sep 30 '25 21:09