Add RAGAnything processing to LightRAG's webui
Overview
This document outlines the key differences between the current working branch and the main branch, focusing on the integration of RAGAnything functionality into the LightRAG server.
Operating Procedure (Message on September 16th, 2025)
1. Install RAG-Anything
git clone https://github.com/HKUDS/RAG-Anything.git
cd RAG-Anything
pip install -e ".[all]"
2. Install the RAGAnything branch of LightRAG
git clone -b RAGAnything https://github.com/HKUDS/LightRAG.git
cd LightRAG
pip install -e ".[api]"
3. Add a .env file to the LightRAG directory and start the server
cd LightRAG
lightrag-server
Modified Files
lightrag/api/lightrag_server.py
New Imports
- RAGManager: Added import for RAGManager from lightrag.ragmanager
- RAGAnything: Added import for RAGAnything and RAGAnythingConfig from raganything
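Based on the module paths listed above, the new import block presumably looks like this (a sketch; the exact statements in the diff may differ slightly):

```python
from lightrag.ragmanager import RAGManager
from raganything import RAGAnything, RAGAnythingConfig
```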
Key Changes
1. Enhanced LightRAG Initialization
rag = LightRAG(
working_dir=args.working_dir,
workspace=args.workspace,
input_dir=args.input_dir, # New parameter added
# ... existing parameters
)
Change: Added input_dir parameter to the LightRAG initialization.
2. RAGAnything Configuration Setup
config = RAGAnythingConfig(
working_dir=args.working_dir or "./rag_storage",
parser="mineru", # Parser selection: mineru or docling
parse_method="auto", # Parse method: auto, ocr, or txt
enable_image_processing=True,
enable_table_processing=True,
enable_equation_processing=True,
)
Purpose: Configures RAGAnything with comprehensive document processing capabilities including:
- Parser Options: Support for mineru or docling parsers
- Parse Methods: Automatic, OCR, or text-based parsing
- Processing Features: Image, table, and equation processing enabled
3. LLM Model Function Definition
def llm_model_func(prompt, system_prompt=None, history_messages=[], **kwargs):
return openai_complete_if_cache(
"gpt-4o-mini",
prompt,
system_prompt=system_prompt,
history_messages=history_messages,
api_key=api_key,
base_url=base_url,
**kwargs,
)
Feature: Standardized LLM interaction using GPT-4o-mini with caching support.
4. Vision Model Function for Image Processing
def vision_model_func(
prompt, system_prompt=None, history_messages=[], image_data=None, **kwargs
):
if image_data:
return openai_complete_if_cache(
"gpt-4o",
"",
system_prompt=None,
history_messages=[],
messages=[
{"role": "system", "content": system_prompt}
if system_prompt
else None,
{
"role": "user",
"content": [
{"type": "text", "text": prompt},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{image_data}"
},
},
],
}
if image_data
else {"role": "user", "content": prompt},
],
api_key=api_key,
base_url=base_url,
**kwargs,
)
else:
return llm_model_func(prompt, system_prompt, history_messages, **kwargs)
Capability: Enhanced vision processing using GPT-4o for image analysis with base64 encoding support.
5. Embedding Function Configuration
embedding_func = EmbeddingFunc(
embedding_dim=3072,
max_token_size=8192,
func=lambda texts: openai_embed(
texts,
model="text-embedding-3-large",
api_key=api_key,
base_url=base_url,
),
)
Specifications:
- Embedding Dimension: 3072
- Max Token Size: 8192
- Model: text-embedding-3-large
6. RAGAnything Initialization
rag_anything = RAGAnything(
lightrag=rag,
config=config,
llm_model_func=llm_model_func,
vision_model_func=vision_model_func,
embedding_func=embedding_func,
)
logger.info("检查raganything的parser下载情况")
rag_anything.verify_parser_installation_once()
RAGManager.set_rag(rag_anything)
Integration:
- Combines LightRAG with RAGAnything capabilities
- Verifies parser installation
- Registers with RAGManager for centralized access
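RAGManager itself is not reproduced in this document. As a rough mental model only (the real lightrag.ragmanager implementation may differ, and get_rag() below is a hypothetical accessor; only set_rag() appears in the diff), a class-level registry like the following is enough to support the pattern above:

```python
from typing import Any, Optional


class RAGManager:
    """Process-wide registry that hands out the shared RAGAnything instance."""

    _rag: Optional[Any] = None  # the RAGAnything instance created at server startup

    @classmethod
    def set_rag(cls, rag_anything: Any) -> None:
        # Called once during initialization (see step 6 above).
        cls._rag = rag_anything

    @classmethod
    def get_rag(cls) -> Optional[Any]:
        # Hypothetical accessor for route handlers that need RAGAnything.
        return cls._rag
```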
7. Updated Route Creation
app.include_router(
create_document_routes(
rag,
rag_anything, # New parameter added
doc_manager,
api_key,
)
)
Enhancement: Document routes now receive both rag and rag_anything instances for comprehensive document processing.
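The body of create_document_routes is not reproduced here. Purely as an illustration of why the routes need both instances (the handler name and extension set below are hypothetical; process_document_complete and ainsert follow RAG-Anything's and LightRAG's documented usage, though exact signatures may vary by version), an upload path might dispatch like this:

```python
import os

# Illustrative set of formats that benefit from RAGAnything's multimodal pipeline.
MULTIMODAL_EXTS = {".pdf", ".docx", ".pptx", ".png", ".jpg", ".jpeg"}


async def handle_upload(file_path: str, rag, rag_anything):
    ext = os.path.splitext(file_path)[1].lower()
    if ext in MULTIMODAL_EXTS:
        # Parse with MinerU/Docling and process images, tables, and equations
        # before the content is indexed into the underlying LightRAG instance.
        await rag_anything.process_document_complete(
            file_path=file_path,
            output_dir="./output",
            parse_method="auto",
        )
    else:
        # Plain text files can keep using the existing LightRAG ingestion path.
        with open(file_path, "r", encoding="utf-8") as f:
            await rag.ainsert(f.read())
```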
Summary of New Capabilities
Enhanced Document Processing
- Multi-format Support: Handles various document formats through advanced parsers
- Visual Content Processing: Processes images, tables, and equations within documents
- Flexible Parsing: Supports automatic, OCR, and text-based parsing methods
Improved AI Integration
- Dual Model Support: Separate functions for text and vision processing
- Advanced Embeddings: High-dimensional embeddings for better semantic understanding
- Caching Optimization: Built-in caching for improved performance
Architecture Improvements
- Centralized Management: RAGManager provides unified access to RAG capabilities
- Modular Design: Clear separation between LightRAG and RAGAnything functionalities
- Enhanced API: Document routes now support extended processing capabilities
Configuration Requirements
Environment Variables
- LLM_BINDING_API_KEY: API key for LLM services
- LLM_BINDING_HOST: Base URL for LLM services
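For reference, this is roughly how those variables would be picked up by the snippets above that reference api_key and base_url (a sketch; the server's actual argument parsing may wrap this differently, and the fallback URL is illustrative):

```python
import os

# Names mirror the environment variables listed above.
api_key = os.getenv("LLM_BINDING_API_KEY")
base_url = os.getenv("LLM_BINDING_HOST", "https://api.openai.com/v1")
```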
Dependencies
- raganything: New dependency for enhanced document processing
- Parser dependencies (mineru/docling) for document parsing
Notes
- The integration maintains backward compatibility with existing LightRAG functionality
- New features are additive and don't break existing workflows
- Parser verification ensures proper setup before operation
@hzywhite Hi, I'm interested in using this feature. Are you ready to merge?
Hi, is there any reason why this PR has not yet been merged? The tests all passed and it looks fine.
I found LightRAG through the RAGAnything repo as a ready-to-use RAG solution that uses RAGAnything. I found RAGAnything while searching for a ready-to-use RAG with a MinerU backend for PDF processing, and it is very disappointing that LightRAG does not support RAGAnything out of the box. This MR is a must-have for LightRAG; just add a new environment variable to choose which backend to use.
ERROR: RAGAnything initialization failed: 'RAGAnything' object has no attribute 'verify_parser_installation_once'
FIX:
pip install "lightrag-hku[api] @ git+https://github.com/HKUDS/LightRAG.git@RAGAnything"
# Install upstream raganything after lightrag.
pip install "raganything[all] @ git+https://github.com/HKUDS/RAG-Anything.git"
ERROR /documents/paginated HTTP/1.1 500
INFO: 127.0.0.1:53198 - "POST /documents/paginated HTTP/1.1" 500
ERROR: Error getting paginated documents: 1 validation error for DocStatusResponse
scheme_name
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.11/v/string_type
ERROR: Traceback (most recent call last):
File "/Users/appleroot/projects/RANY/.venv/lib/python3.13/site-packages/lightrag/api/routers/document_routes.py", line 2935, in get_documents_paginated
DocStatusResponse(
~~~~~~~~~~~~~~~~~^
id=doc_id,
^^^^^^^^^^
...<11 lines>...
multimodal_content=doc.multimodal_content,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/appleroot/projects/RANY/.venv/lib/python3.13/site-packages/pydantic/main.py", line 253, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for DocStatusResponse
scheme_name
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.11/v/string_type
FIX:
# DELETE YOUR PREVIOUS RAG DATA
rm -rf inputs/ lightrag.log rag_storage/
ERROR:
IN UI:
422 Unprocessable Content {"detail":[{"type":"missing","loc":["body","schemeId"],"msg":"Field required","input":null}]} /documents/upload
IN BACKEND:
INFO: 127.0.0.1:53353 - "POST /documents/upload HTTP/1.1" 422
FIX: Restart the browser, or right-click the Reload Page button -> Clear Cache and Hard Reload
ERROR:root:Error in parse_pdf: MineruParser._run_mineru_command() got an unexpected keyword argument 'parser'
FIX: https://github.com/HKUDS/RAG-Anything/pull/113
pip install "mineru[core] @ git+https://github.com/opendatalab/MinerU.git"
pip install "raganything[all] @ git+https://github.com/HKUDS/RAG-Anything.git@ui"
So, here are the commands I ran for a clean install of LightRAG with MinerU support.
mkdir my-rag && cd my-rag
# Create python venv in new folder
python3 -m venv .venv
. .venv/bin/activate
# Install correct combination of packages
pip install "lightrag-hku[api] @ git+https://github.com/HKUDS/LightRAG.git@RAGAnything"
pip install "mineru[core] @ git+https://github.com/opendatalab/MinerU.git"
pip install "raganything[all] @ git+https://github.com/HKUDS/RAG-Anything.git@ui"
# save env.example file
wget "https://raw.githubusercontent.com/HKUDS/LightRAG/refs/heads/RAGAnything/env.example"
# copy and edit .env file
cp env.example .env
# nano .env
# launch server
lightrag-server
@hzywhite I am also waiting for this to be merged, but I'm curious about lightrag/api/webui/assets: what's with all these compiled assets? They don't look like they are meant to be there; at least IMO they shouldn't be.
This is a highly anticipated feature, and I’ll be able to dedicate time to researching and testing it only after addressing my current tasks. Please resolve the conflicts with the main branch first. Thank you.
This is so that users can clone the repository and run the server without rebuilding the frontend project.
@danielaskdd Thanks for the reply. Just to clarify, does the CI pipeline verify or rebuild these compiled assets to prevent the possibility of malicious code being injected through a PR by a bad-faith contributor?
The CI pipeline-generated frontend build code cannot be directly added to the repository, correct? Are you suggesting that the CI pipeline should build the frontend assets and push them to PyPI instead? I don't have experience with this process—could you please share your insights?
To keep this thread focused, I opened a new issue
@7frank I also tested this locally and I think you are incorrect. The API key does not get overridden, and that edit is intentional so that RAGAnything uses the same LLM binding as LightRAG. My only comment would be that the queries don't use RAGAnything's VLM-enhanced query, and there should probably be two additional query routes for RAGAnything queries. Other than that, this works great for me locally, following the simple instructions included in the PR.
The lack of a merge here is pretty inconvenient for the time being. RAGAnything enhances LightRAG 1000x.
@sicarius97
Possible. Did you try accessing LightRAG via the API or through the web frontend?
In my case, I wanted to access the LightRAG API using a simple MCP. Before the merge, this worked without authentication. After the merge, all routes suddenly required an API key that was never set — LIGHTRAG_API_KEY.
My experience was as follows:
Routes such as the documents route now use an API key for authentication. See here:
https://github.com/HKUDS/LightRAG/blob/9bc5f1578cdc36e204d4ee5491c1a0ab086ca40d/lightrag/api/lightrag_server.py#L762-L769
The API key they use is LIGHTRAG_API_KEY, defined here:
https://github.com/HKUDS/LightRAG/blob/9bc5f1578cdc36e204d4ee5491c1a0ab086ca40d/lightrag/api/lightrag_server.py#L199
However, in the PR, it was overridden by this line: https://github.com/HKUDS/LightRAG/blob/9bc5f1578cdc36e204d4ee5491c1a0ab086ca40d/lightrag/api/lightrag_server.py#L631
As a result, when using LightRAG via the API (not the web frontend), it now returns 401 errors because the API requires a key, even though no LIGHTRAG_API_KEY is set.
I renamed the variable for the local scope, and the error disappeared.
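A minimal sketch of that kind of fix, assuming the RAGAnything setup block reads the LLM binding into its own names instead of reusing the module-level api_key (which also gates route authentication via LIGHTRAG_API_KEY); all names here are illustrative:

```python
import os

# Keep the RAGAnything LLM credentials in locally scoped names so the
# server-level `api_key` used for route authentication is left untouched.
rag_llm_api_key = os.getenv("LLM_BINDING_API_KEY")
rag_llm_base_url = os.getenv("LLM_BINDING_HOST")
```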
Really looking forward to the merge of RAGAnything into the LightRAG server.
Is it possible to help here to get this done? Really looking forward to it.
Hope merge will happen 🙏
Any updates? I'm really hoping this PR gets merged.