graphrag
A modular graph-based Retrieval-Augmented Generation (RAG) system
### Describe the issue When attempting to convert CSV data into a YAML format, specifying a custom column for the timestamp results in a ValueError. The exception is raised within...
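For context, a CSV input block with a custom timestamp column is configured in settings.yaml roughly as in the sketch below. The column names are illustrative placeholders, not the reporter's values, and in my reading of the CSV loader timestamp_column generally needs to be paired with timestamp_format, which is one common source of this kind of ValueError.

```yaml
# Hedged sketch of a CSV input block; column names are hypothetical.
input:
  type: file
  file_type: csv
  base_dir: "input"
  file_pattern: ".*\\.csv$"
  text_column: "body"          # column holding the document text
  timestamp_column: "created"  # the custom timestamp column at issue
  timestamp_format: "%Y-%m-%d" # typically required whenever timestamp_column is set
```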
### Describe the issue https://github.com/win4r/GraphRAG4OpenWebUI ### Steps to reproduce _No response_ ### GraphRAG Config Used _No response_ ### Logs and screenshots _No response_ ### Additional Information - GraphRAG Version: -...
### Describe the issue I use a local LLM to run graphrag and completed all workflows successfully, but graphrag failed to answer any questions about the provided data. I found there's...
[Issue]: ❌ Errors occurred during the pipeline run, see logs for more details (but a different case)
### Describe the issue ❌ create_base_entity_graph None ### Steps to reproduce Run the Get Started guide ### GraphRAG Config Used encoding_model: cl100k_base skip_workflows: [] llm: api_key: ${GRAPHRAG_API_KEY} type: openai_chat # or azure_openai_chat...
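The flattened text above looks like the top of the default settings.yaml template; reconstructed here as a hedged sketch, where anything beyond the quoted keys is an assumption from the template rather than the reporter's actual file:

```yaml
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4-turbo-preview  # template default; not confirmed by the excerpt
  model_supports_json: true   # template default; not confirmed by the excerpt
  # api_base: https://<instance>.openai.azure.com  # only for Azure or proxy endpoints
```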
### Describe the bug _No response_ ### Steps to reproduce _No response_ ### Expected Behavior _No response_ ### GraphRAG Config Used _No response_ ### Logs and screenshots  ### Additional...
### Describe the bug If you look at the function below: https://github.com/microsoft/graphrag/blob/309abc982f158c38099c6098d30b35a20972d258/graphrag/index/graph/extractors/graph/graph_extractor.py#L148C5-L182C23

```python
async def _process_document(
    self, text: str, prompt_variables: dict[str, str]
) -> str:
    response = await self._llm(...
```
### Describe the issue I use vLLM to launch a local large model with an OpenAI-compatible API, but it won't work ### Steps to reproduce step 1: python -m vllm.entrypoints.openai.api_server --max-model-len 6144 --gpu-memory-utilization...
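The usual approach for this setup is to point graphrag's llm block at the local OpenAI-compatible endpoint that vLLM exposes. A hedged sketch follows; the port, model name, and key value are assumptions (vLLM's server normally ignores the API key, and the model field must match the model vLLM is actually serving):

```yaml
llm:
  type: openai_chat
  api_key: EMPTY                      # placeholder; a local vLLM server usually ignores it
  api_base: http://localhost:8000/v1  # assumed default vLLM port
  model: my-local-model               # hypothetical; must match the model name vLLM serves
  model_supports_json: false          # many local models have no JSON mode
```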
I built the index from a Chinese-language manual and found that the extracted entities were very messy. Is there any good way to optimize this?
### Describe the bug I tried setting max_gleanings to 0 in settings.yaml to prevent the gleaning step from happening, but in the indexing-engine.log file max_gleanings is still 1 and...
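For reference, gleaning is controlled per extraction section in settings.yaml; a hedged sketch based on the default template is below. Note that entity_extraction and claim_extraction each carry their own max_gleanings, so a stray 1 in the log may come from the section that was not changed.

```yaml
# Hedged sketch; section and key names follow the default settings.yaml template.
entity_extraction:
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 0   # disable the gleaning loop for entity extraction
claim_extraction:
  prompt: "prompts/claim_extraction.txt"
  max_gleanings: 0   # claims have an independent gleaning setting
```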
### Describe the issue My text is in Vietnamese, but the community reports/content I receive are in English. I have tried to customize the prompt and to translate it into Vietnamese. Though...
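One place to intervene is the community report prompt override in settings.yaml; the sketch below is hedged (section and file names follow the default template), and the prompt file itself would need an explicit instruction to write the report in Vietnamese:

```yaml
community_reports:
  prompt: "prompts/community_report.txt"  # point at a copy edited to request Vietnamese output
  max_length: 2000
  max_input_length: 8000
```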