
The all-in-one solution for RAG. Build, scale, and deploy state-of-the-art Retrieval-Augmented Generation applications.

141 R2R issues

**Is your feature request related to a problem? Please describe.** It isn't a problem, but the R2R-Dashboard provides a drop-down for model selection; at the same time it seems I...

Thank you so much for this project and your efforts to make GraphRAG accessible for the masses! **Is your feature request related to a problem? Please describe.** Systems with an...

Hi, I have set up my environment to use Ollama with R2R. While the CLI commands provided in the documentation work fine for search, I encounter an exception when using...

**Describe the bug** I have tried with OpenAI and R2R works fine, but when I switch to Ollama it does not return an answer; I have added detailed errors....

Hi, I have recently started using R2R. I have followed the documentation and set up Postgres + pgvector. While trying to start the R2R server using the following docker command:...

Hello, I am trying to install R2R locally on an Ubuntu 22.04 system; here are the steps I followed before hitting the error: conda create -n r2r python=3.11...

It's great that Neo4j is already your KG provider. Not sure if you're aware, but Neo4j added vector indexing and search (HNSW) last summer. So you can do everything...
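The Neo4j vector-index feature this comment refers to can be sketched as follows. This is a minimal, illustrative example assuming Neo4j ≥ 5.11 and a session object compatible with the official Python driver; the index name `chunk_embeddings`, the `Chunk` label, the `embedding` property, and the 768-dimension cosine configuration are assumptions for illustration, not R2R's actual schema.

```python
# Cypher to create an HNSW-backed vector index (Neo4j >= 5.11).
# Index name, label, property, and dimensions are illustrative.
CREATE_INDEX_CYPHER = """
CREATE VECTOR INDEX chunk_embeddings IF NOT EXISTS
FOR (c:Chunk) ON (c.embedding)
OPTIONS {indexConfig: {
  `vector.dimensions`: 768,
  `vector.similarity_function`: 'cosine'
}}
"""

# Cypher to run an approximate k-NN search against that index.
QUERY_INDEX_CYPHER = """
CALL db.index.vector.queryNodes('chunk_embeddings', $k, $query_embedding)
YIELD node, score
RETURN node.text AS text, score
"""

def knn_search(session, query_embedding, k=5):
    """Return the top-k (text, score) pairs for a query embedding.

    `session` is expected to behave like a neo4j-driver session
    (anything exposing a compatible `.run`); this sketch does not
    open a database connection itself.
    """
    result = session.run(QUERY_INDEX_CYPHER, k=k, query_embedding=query_embedding)
    return [(record["text"], record["score"]) for record in result]
```

With an index like this, similarity search and graph traversal can share one store, which is the point the commenter is making about not needing a separate vector database.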

**Describe the bug** Apparently `min_p` isn't supported in R2R, a massive oversight; please fix. ![image](https://github.com/user-attachments/assets/1ae34014-b88c-4044-84d9-52f07f4b8588) ![image](https://github.com/user-attachments/assets/a2d45e73-0b1d-46f1-a45d-5798f25a6885) ![image](https://github.com/user-attachments/assets/ce2daaea-02f5-47c9-a0a6-679479b5283d)

Feature request to add the LLM and embedding provider [Solar](https://www.upstage.ai/solar-llm). It is the number-one LLM provider in APAC and can be used via their API or AWS JumpStart. Solar shows very nice...

**Describe the bug** I'm using the following config: ``` { "app": { "max_file_size_in_mb": 100 }, "embedding": { "provider": "ollama", "base_model": "nomic-embed-text", "base_dimension": 768, "batch_size": 32 }, "completions": { "provider": "litellm",...