[Bug]: ModuleNotFoundError: No module named 'pyuca'
Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [x] I believe this is a legitimate bug, not just a question or feature request.
Describe the bug
I got this error when starting the server with `lightrag-gunicorn --workers 1`:
```
File "/home/andrey/lightrag/lightrag/api/routers/document_routes.py", line 6, in <module>
    from pyuca import Collator
ModuleNotFoundError: No module named 'pyuca'
```
The module is installed with pip, but I still get the error.
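A common cause of "installed with pip but import fails" is that pip targeted a different interpreter than the one the server entry point runs under. A quick check (a sketch; adjust the interpreter path to match your setup):

```bash
# Show which interpreter the lightrag-gunicorn entry point runs under
head -1 "$(command -v lightrag-gunicorn)"

# Check whether pyuca is visible to that interpreter; replace python3
# with the interpreter path printed by the shebang above
python3 -m pip show pyuca
python3 -c "from pyuca import Collator; print('pyuca import OK')"
```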
Steps to reproduce
- Get the latest release with `git clone` / `git pull`
- Try to run the server
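For reference, the reproduction boils down to roughly the following (the repository URL and the `[api]` install step are assumptions, not stated in the report):

```bash
# Clone the latest source (URL assumed)
git clone https://github.com/HKUDS/LightRAG.git
cd LightRAG

# Install the server with its API extras (assumed install path)
pip install -e ".[api]"

# Starting the server is where the import error appears
lightrag-gunicorn --workers 1
```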
Expected Behavior
No response
LightRAG Config Used
This is a sample `.env` file:
```
### Server Configuration
HOST=0.0.0.0
PORT=9621
WORKERS=2
CORS_ORIGINS=http://localhost:3000,http://localhost:8080
WEBUI_TITLE='Graph RAG Engine'
WEBUI_DESCRIPTION="Simple and Fast Graph Based RAG System"

### Optional SSL Configuration
SSL=true
SSL_CERTFILE=/path/to/cert.pem
SSL_KEYFILE=/path/to/key.pem

### Directory Configuration (defaults to current working directory)
WORKING_DIR=<absolute_path_for_working_dir>
INPUT_DIR=<absolute_path_for_doc_input_dir>

### Ollama Emulating Model Tag
OLLAMA_EMULATING_MODEL_TAG=latest

# Max nodes returned from graph retrieval
MAX_GRAPH_NODES=1000

### Logging level
LOG_LEVEL=INFO
VERBOSE=False
LOG_MAX_BYTES=10485760
LOG_BACKUP_COUNT=5
# Log file location (defaults to current working directory)
LOG_DIR=/path/to/log/directory

### Settings for RAG query
HISTORY_TURNS=3
COSINE_THRESHOLD=0.2
TOP_K=60
MAX_TOKEN_TEXT_CHUNK=2000
MAX_TOKEN_RELATION_DESC=2000
MAX_TOKEN_ENTITY_DESC=2000

### Settings for document indexing
SUMMARY_LANGUAGE=English
CHUNK_SIZE=200
CHUNK_OVERLAP_SIZE=20
# Number of documents processed in parallel in one batch
MAX_PARALLEL_INSERT=1
# Max tokens for entity/relation descriptions after merge
MAX_TOKEN_SUMMARY=500
# Number of entities/edges that triggers LLM re-summary on merge (at least 3 is recommended)
FORCE_LLM_SUMMARY_ON_MERGE=5
# Number of chunks sent to the embedding model in a single request
EMBEDDING_BATCH_NUM=8
# Max concurrent embedding requests
# EMBEDDING_FUNC_MAX_ASYNC=2
MAX_EMBED_TOKENS=8192

### LLM Configuration
# Timeout in seconds for the LLM; None for no timeout
TIMEOUT=150
# Some models like o1-mini require temperature to be set to 1
TEMPERATURE=0.01
# Max concurrent LLM requests
MAX_ASYNC=1
# Max tokens sent to the LLM (must be less than the model's context size)
MAX_TOKENS=1024
ENABLE_LLM_CACHE=true
ENABLE_LLM_CACHE_FOR_EXTRACT=true

### Ollama example (for local services installed with Docker, you can use host.docker.internal as host)
LLM_BINDING=ollama
LLM_MODEL=qwen3:8b
LLM_BINDING_API_KEY=
LLM_BINDING_HOST=http://localhost:11434

### OpenAI-alike example
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key

### lollms example
LLM_BINDING=lollms
LLM_MODEL=mistral-nemo:latest
LLM_BINDING_HOST=http://localhost:9600
LLM_BINDING_API_KEY=your_api_key

### Embedding Configuration (use a valid host; for local services installed with Docker, you can use host.docker.internal)
EMBEDDING_MODEL=Definity/snowflake-arctic-embed-l-v2.0-q8_0:latest
EMBEDDING_DIM=1024
EMBEDDING_BINDING_API_KEY=your_api_key

### Ollama example
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434

### OpenAI-alike example
EMBEDDING_BINDING=openai
LLM_BINDING_HOST=https://api.openai.com/v1

### lollms example
EMBEDDING_BINDING=lollms
EMBEDDING_BINDING_HOST=http://localhost:9600

### Optional for Azure (LLM_BINDING_HOST and LLM_BINDING_API_KEY take priority)
AZURE_OPENAI_API_VERSION=2024-08-01-preview
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_API_KEY=your_api_key
```
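If the server is started from a shell, the values above need to be in the environment (or in a `.env` file the server picks up). A minimal launch sketch, assuming the `.env` sits in the working directory:

```bash
# Export every KEY=VALUE pair from .env, then start the server.
# LightRAG may also read .env on its own; exporting just makes it explicit.
set -a
source .env
set +a
lightrag-gunicorn --workers "${WORKERS:-1}"
```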
Logs and screenshots
Additional Information
- LightRAG Version: 1.3.6
- Operating System: Ubuntu 24.04 LTS
- Python Version: 3.12
- Related Issues:
Try to install `pyuca` manually to see what happens.
What do you mean by "manually"? I wrote that I used `pip install pyuca`.
After cloning the repository, please install the LightRAG server and verify that the installation completes without errors by running the following command:
```bash
pip install -e ".[api]"
```
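If that install completes cleanly, one way to confirm `pyuca` landed in the interpreter the server will actually use is something like this (a sketch; running pip through the interpreter itself avoids PATH mismatches):

```bash
# Install and verify through the same interpreter
python3 -m pip install -e ".[api]"
python3 -m pip show pyuca
python3 -c "from pyuca import Collator; print('import OK')"
```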
That did not help:
```
Starting Gunicorn with direct Python API...
Traceback (most recent call last):
  File "/home/andrey/.local/bin/lightrag-gunicorn", line 8, in
```
The latest version of LightRAG has transitioned to `pyproject.toml` for dependency management, allowing project installation via `uv`. Please verify whether the issue is resolved with the latest version.
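Under the `pyproject.toml` layout, an install via `uv` would look roughly like this (a sketch; the `api` extra name is assumed to be unchanged):

```bash
# From the repository root
uv venv                      # create .venv
source .venv/bin/activate
uv pip install -e ".[api]"   # editable install with API extras
lightrag-gunicorn --workers 1
```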
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please open a new issue if you still have this problem.