
A modular graph-based Retrieval-Augmented Generation (RAG) system

440 graphrag issues

Our current docsite is focused on the narrative "why" and on using the CLI entry points. We should also have proper API documentation. In order to achieve this, we should move...

documentation

Currently, if the TPM quota is exceeded while indexing data, the error message `Error Invoking LLM` is printed in the logs. With throttling and rate limiting in use, this particular message...

triage
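
For context, these are the throttling-related knobs in the `llm` section of the default `settings.yaml` that graphrag's init step generates; the values shown here are illustrative assumptions, not recommendations:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat
  # throttling / rate-limiting knobs (values illustrative):
  tokens_per_minute: 150_000   # TPM budget for this client
  requests_per_minute: 10_000  # RPM budget for this client
  max_retries: 10              # attempts before the error surfaces in the logs
  max_retry_wait: 10.0         # seconds of backoff between retries
  concurrent_requests: 25      # parallel in-flight requests
```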

⠏ GraphRAG Indexer
├── Loading Input (text) - 1 files loaded (0 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
└── create_summarized_entities
├── create_summarized_entities ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸━━━━━━━ 82% 0:00:01 0:00:00 ❌...

The configuration of `tokens_per_minutes` in `settings.yaml` does not seem to be picked up by the indexing engine. I've tried setting it to both `50000` and `50_000` (as per the commented example), but...

bug
High Priority!
fixed
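
To illustrate the two spellings mentioned above: assuming graphrag loads `settings.yaml` with a YAML 1.1 style loader such as pyyaml (an assumption, not verified here), both forms should resolve to the same integer:

```yaml
llm:
  tokens_per_minute: 50000     # plain integer; parses as 50000 under any loader
  # tokens_per_minute: 50_000  # underscore form from the commented example;
  #                            # YAML 1.1 loaders read this as the int 50000,
  #                            # strict YAML 1.2 loaders would read it as a string
```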

When trying to use `graphrag.prompt_tune` via `python -m graphrag.prompt_tune --root . --no-entity-types` with the following settings.yaml:

```
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  ...
```

Fix syntax error/typo in `overview.md`.

Hi! I was able to reproduce the example at https://microsoft.github.io/graphrag/posts/get_started/. However, when I switch to the exact same method but with some _shorter fictional stories_, it crashes during the...

## Description When setting `tokens_per_minute` and `requests_per_minute` for the llm in settings.yaml, they don't take effect. This is because the config reader only reads `tpm` and `rpm` instead of the item in...
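
If the behavior described above is accurate, one unverified workaround sketch is to supply the short key names the config reader reportedly looks for; `tpm` and `rpm` are inferred from this issue's text, not from documented configuration, and the values are placeholders:

```yaml
llm:
  tokens_per_minute: 50000  # documented key; reportedly ignored
  requests_per_minute: 600  # documented key; reportedly ignored
  tpm: 50000                # short key the reader reportedly reads instead
  rpm: 600                  # short key the reader reportedly reads instead
```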

In settings.yaml, configure the llm to use llama3 on Groq, or any other model compatible with the OpenAI API:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3-8b-8192
  model_supports_json: false # ...
```
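
Presumably such a setup also needs the client pointed at the provider's OpenAI-compatible endpoint. A minimal sketch, assuming graphrag's `api_base` setting and Groq's published base URL:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat
  api_base: https://api.groq.com/openai/v1  # Groq's OpenAI-compatible endpoint
  model: llama3-8b-8192
  model_supports_json: false  # llama3 has no native JSON-mode guarantee
```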

When you override the llm in

```yaml
community_report:
  llm:
```

it always uses the llm from settings.yaml when running `create_final_community_reports`.

bug
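
For illustration, the override shape described in this issue (the `community_report` key is copied verbatim from the report; both model names are placeholder assumptions):

```yaml
llm:
  model: gpt-4-turbo-preview  # root-level llm; reportedly always used
community_report:
  llm:
    model: gpt-4o  # per-workflow override; reportedly ignored by create_final_community_reports
```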