
[Bug]: ModuleNotFoundError: No module named 'pyuca'

Open ndrewpj opened this issue 7 months ago • 4 comments

Do you need to file an issue?

  • [x] I have searched the existing issues and this bug is not already filed.
  • [x] I believe this is a legitimate bug, not just a question or feature request.

Describe the bug

I got this error when starting the server with `lightrag-gunicorn --workers 1`:

 File "/home/andrey/lightrag/lightrag/api/routers/document_routes.py", line 6, in <module>
    from pyuca import Collator
ModuleNotFoundError: No module named 'pyuca'

The module is installed with pip, but I still get the error.

Steps to reproduce

  1. Get the latest release with git clone / git pull
  2. Try to run the server

Expected Behavior

No response

LightRAG Config Used

This is a sample .env file:

```
### Server Configuration
HOST=0.0.0.0
PORT=9621
WORKERS=2
CORS_ORIGINS=http://localhost:3000,http://localhost:8080
WEBUI_TITLE='Graph RAG Engine'
WEBUI_DESCRIPTION="Simple and Fast Graph Based RAG System"

### Optional SSL Configuration
SSL=true
SSL_CERTFILE=/path/to/cert.pem
SSL_KEYFILE=/path/to/key.pem

### Directory Configuration (defaults to current working directory)
WORKING_DIR=<absolute_path_for_working_dir>
INPUT_DIR=<absolute_path_for_doc_input_dir>

### Ollama Emulating Model Tag
OLLAMA_EMULATING_MODEL_TAG=latest

### Max nodes returned from graph retrieval
MAX_GRAPH_NODES=1000

### Logging level
LOG_LEVEL=INFO
VERBOSE=False
LOG_MAX_BYTES=10485760
LOG_BACKUP_COUNT=5

### Logfile location (defaults to current working directory)
LOG_DIR=/path/to/log/directory

### Settings for RAG query
HISTORY_TURNS=3
COSINE_THRESHOLD=0.2
TOP_K=60
MAX_TOKEN_TEXT_CHUNK=2000
MAX_TOKEN_RELATION_DESC=2000
MAX_TOKEN_ENTITY_DESC=2000

### Settings for document indexing
SUMMARY_LANGUAGE=English
CHUNK_SIZE=200
CHUNK_OVERLAP_SIZE=20

### Number of documents processed in parallel in one batch
MAX_PARALLEL_INSERT=1

### Max tokens for entity/relation descriptions after merge
MAX_TOKEN_SUMMARY=500

### Number of entities/edges that triggers LLM re-summary on merge (at least 3 is recommended)
FORCE_LLM_SUMMARY_ON_MERGE=5

### Number of chunks sent to Embedding in a single request
EMBEDDING_BATCH_NUM=8

### Max concurrent requests for Embedding
# EMBEDDING_FUNC_MAX_ASYNC=2
MAX_EMBED_TOKENS=8192

### LLM Configuration
### Timeout in seconds for LLM, None for infinite timeout
TIMEOUT=150

### Some models like o1-mini require temperature to be set to 1
TEMPERATURE=0.01

### Max concurrent requests to LLM
MAX_ASYNC=1

### Max tokens sent to LLM (less than the context size of the model)
MAX_TOKENS=1024
ENABLE_LLM_CACHE=true
ENABLE_LLM_CACHE_FOR_EXTRACT=true

### Ollama example (for local services installed with Docker, you can use host.docker.internal as host)
LLM_BINDING=ollama
LLM_MODEL=qwen3:8b
LLM_BINDING_API_KEY=
LLM_BINDING_HOST=http://localhost:11434

### OpenAI-compatible example
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key

### lollms example
LLM_BINDING=lollms
LLM_MODEL=mistral-nemo:latest
LLM_BINDING_HOST=http://localhost:9600
LLM_BINDING_API_KEY=your_api_key

### Embedding Configuration (use a valid host; for local services installed with Docker, you can use host.docker.internal)
EMBEDDING_MODEL=Definity/snowflake-arctic-embed-l-v2.0-q8_0:latest
EMBEDDING_DIM=1024
EMBEDDING_BINDING_API_KEY=your_api_key

### ollama example
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434

### OpenAI-compatible example
EMBEDDING_BINDING=openai
LLM_BINDING_HOST=https://api.openai.com/v1

### lollms example
EMBEDDING_BINDING=lollms
EMBEDDING_BINDING_HOST=http://localhost:9600

### Optional for Azure (LLM_BINDING_HOST, LLM_BINDING_API_KEY take priority)
AZURE_OPENAI_API_VERSION=2024-08-01-preview
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_API_KEY=your_api_key
```

Logs and screenshots


Additional Information

  • LightRAG Version: 1.3.6
  • Operating System: Ubuntu 24.04 LTS
  • Python Version: 3.12
  • Related Issues:

ndrewpj avatar May 05 '25 22:05 ndrewpj

Try to install pyuca manually to see what happens.

danielaskdd avatar May 05 '25 23:05 danielaskdd

> Try to install pyuca manually to see what happens.

What do you mean by manually? I wrote that I used `pip install pyuca`.

ndrewpj avatar May 06 '25 07:05 ndrewpj

After cloning the repository, please install the LightRAG server and verify that the installation completes without errors by running the following command:

pip install -e ".[api]"

danielaskdd avatar May 06 '25 08:05 danielaskdd

> After cloning the repository, please install the LightRAG server and verify that the installation completes without errors by running the following command:
>
> pip install -e ".[api]"

That did not help:

```
Starting Gunicorn with direct Python API...
Traceback (most recent call last):
  File "/home/andrey/.local/bin/lightrag-gunicorn", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/andrey/lightrag/lightrag/api/run_with_gunicorn.py", line 223, in main
    app.run()
  File "/home/andrey/.local/share/pipx/venvs/lightrag-hku/lib/python3.12/site-packages/gunicorn/app/base.py", line 71, in run
    Arbiter(self).run()
    ^^^^^^^^^^^^^
  File "/home/andrey/.local/share/pipx/venvs/lightrag-hku/lib/python3.12/site-packages/gunicorn/arbiter.py", line 57, in __init__
    self.setup(app)
  File "/home/andrey/.local/share/pipx/venvs/lightrag-hku/lib/python3.12/site-packages/gunicorn/arbiter.py", line 117, in setup
    self.app.wsgi()
  File "/home/andrey/.local/share/pipx/venvs/lightrag-hku/lib/python3.12/site-packages/gunicorn/app/base.py", line 66, in wsgi
    self.callable = self.load()
                    ^^^^^^^^^^^
  File "/home/andrey/lightrag/lightrag/api/run_with_gunicorn.py", line 205, in load
    from lightrag.api.lightrag_server import get_application
  File "/home/andrey/lightrag/lightrag/api/lightrag_server.py", line 41, in <module>
    from lightrag.api.routers.document_routes import (
  File "/home/andrey/lightrag/lightrag/api/routers/__init__.py", line 5, in <module>
    from .document_routes import router as document_router
  File "/home/andrey/lightrag/lightrag/api/routers/document_routes.py", line 6, in <module>
    from pyuca import Collator
ModuleNotFoundError: No module named 'pyuca'
```

ndrewpj avatar May 24 '25 20:05 ndrewpj
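Editor's note: the traceback above shows the server running from a pipx-managed virtual environment (`/home/andrey/.local/share/pipx/venvs/lightrag-hku/...`), so a `pip install pyuca` that targets a different interpreter (for example the user site-packages) would not be visible there. A minimal, LightRAG-agnostic way to diagnose this kind of mismatch is to check which interpreter is running and where (or whether) the module resolves:

```python
import importlib.util
import sys


def locate(module_name: str):
    """Return the file path a module would be imported from, or None if
    the current interpreter cannot find it."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None


# The interpreter actually running. Compare this path with the environment
# that `pip install` targeted (e.g. a pipx venv vs. the user site-packages).
print("interpreter:", sys.executable)
print("pyuca:", locate("pyuca"))  # None means this interpreter can't import it
```

If the server was installed with pipx, the missing dependency can be added directly into that application's venv with `pipx inject lightrag-hku pyuca` (assuming `lightrag-hku` is the name pipx lists for the installation).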

The latest version of LightRAG has transitioned to pyproject.toml for dependency management, allowing project installation via uv. Please verify if the issue is resolved with the latest version.

danielaskdd avatar Jul 20 '25 11:07 danielaskdd
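Editor's note: one way to verify the fix on the pyproject.toml-based layout is a clean install into an isolated uv environment. This is a sketch under the assumption that the `api` extra mentioned earlier still exists; check the current README for the exact commands:

```shell
# Fresh clone and isolated environment, so no stale interpreter is picked up.
git clone https://github.com/HKUDS/LightRAG.git
cd LightRAG
uv venv
uv pip install -e ".[api]"

# Confirm the dependency resolved inside this environment.
uv run python -c "import pyuca; print(pyuca.__name__)"
```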

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Oct 19 '25 22:10 github-actions[bot]

This issue has been automatically closed because it has not had recent activity. Please open a new issue if you still have this problem.

github-actions[bot] avatar Nov 01 '25 22:11 github-actions[bot]