ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
[X] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** When trying to reproduce the code from the [Ragas documentation](https://docs.ragas.io/en/stable/howtos/integrations/llamaindex.html), the following line of...
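For anyone triaging this, a minimal sketch of the shape of that integration, driven through the plain `ragas.evaluate()` API rather than the docs' helper; the data directory, toy question, and metric choice are illustrative, and the import paths assume llama-index >= 0.10.

```python
from datasets import Dataset
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# Build a query engine over local documents (path is a placeholder).
documents = SimpleDirectoryReader("./data").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

questions = ["What does ragas measure?"]  # toy question for illustration
rows = {"question": [], "answer": [], "contexts": []}
for q in questions:
    response = query_engine.query(q)
    rows["question"].append(q)
    rows["answer"].append(response.response)
    # Keep the retrieved chunks so faithfulness can check the answer against them.
    rows["contexts"].append([n.node.get_content() for n in response.source_nodes])

result = evaluate(Dataset.from_dict(rows), metrics=[faithfulness, answer_relevancy])
print(result)
```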
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** A clear and concise description of what the bug is. Ragas version:...
Hi, I've been using the TestsetGenerator with the default and other adjusted question type distributions, but in nearly all cases, using a test_size of 20-50, I get very few results...
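For reference, a minimal sketch of the generation call being discussed, assuming the v0.1.x testset API; the loader path and distribution values are placeholders.

```python
from langchain_community.document_loaders import DirectoryLoader
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import multi_context, reasoning, simple

documents = DirectoryLoader("./docs").load()

# with_openai() wires up default OpenAI generator/critic models.
generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    documents,
    test_size=50,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)

df = testset.to_pandas()
print(len(df))  # per the report above, this often comes back far below test_size
```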
[X] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** Following [this guide](https://docs.ragas.io/en/stable/howtos/applications/compare_llms.html#id1) runs into an error instead of running successfully because...
**Describe the bug** It seems like `context_recall` is not working with Azure OpenAI. Ragas version: v0.1.1 Python version: **Code to Reproduce** This was from a user, but I will try to reproduce...
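A minimal configuration sketch for running `context_recall` against Azure OpenAI, assuming the v0.1.x `evaluate()` overrides; the deployment names, API version, and sample row are placeholders, and `AZURE_OPENAI_ENDPOINT` / `AZURE_OPENAI_API_KEY` are assumed to be set in the environment.

```python
from datasets import Dataset
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from ragas import evaluate
from ragas.metrics import context_recall

# Placeholder Azure deployments -- substitute your own.
azure_llm = AzureChatOpenAI(
    azure_deployment="my-gpt-4-deployment",
    api_version="2023-05-15",
)
azure_embeddings = AzureOpenAIEmbeddings(
    azure_deployment="my-embedding-deployment",
    api_version="2023-05-15",
)

ds = Dataset.from_dict({
    "question": ["Where is the Eiffel Tower?"],
    "answer": ["The Eiffel Tower is in Paris."],
    "contexts": [["The Eiffel Tower stands in Paris, France."]],
    "ground_truth": ["Paris, France."],
})

# v0.1.x evaluate() lets you swap in the judge LLM and embeddings explicitly.
result = evaluate(ds, metrics=[context_recall], llm=azure_llm, embeddings=azure_embeddings)
print(result)
```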
In practical use, we have found that in addition to Ragas's existing metrics, we also place great importance on metrics related to the quality of the answers...
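Worth noting that Ragas already ships some answer-quality metrics; a minimal sketch, assuming the v0.1.x metric names (`answer_correctness`, `answer_similarity`, and the aspect critiques), with a toy row for illustration.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, answer_similarity
from ragas.metrics.critique import conciseness

ds = Dataset.from_dict({
    "question": ["What is Ragas for?"],
    "answer": ["Ragas evaluates RAG pipelines."],
    "contexts": [["Ragas is an evaluation framework for RAG pipelines."]],
    "ground_truth": ["Evaluating Retrieval Augmented Generation pipelines."],
})

# answer_correctness / answer_similarity score the answer against the ground
# truth; critiques like conciseness judge a free-form aspect of the answer.
result = evaluate(ds, metrics=[answer_correctness, answer_similarity, conciseness])
print(result)
```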
When evaluating the last piece of data (at Evaluating: 99%), the program got stuck, and I had to run the entire evaluation process from scratch. Can I cache the intermediate results...
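Until resumable runs exist, a workaround sketch: evaluate in chunks and persist each chunk's scores, so a crash at 99% only costs the last chunk. The chunk size, cache directory, and metric choice here are illustrative.

```python
import os

import pandas as pd
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness


def evaluate_in_chunks(ds: Dataset, chunk_size: int = 10, out_dir: str = "eval_cache") -> pd.DataFrame:
    os.makedirs(out_dir, exist_ok=True)
    frames = []
    for start in range(0, len(ds), chunk_size):
        path = os.path.join(out_dir, f"chunk_{start}.csv")
        if os.path.exists(path):  # resume: skip chunks already scored
            frames.append(pd.read_csv(path))
            continue
        chunk = ds.select(range(start, min(start + chunk_size, len(ds))))
        result = evaluate(chunk, metrics=[faithfulness])
        df = result.to_pandas()
        df.to_csv(path, index=False)  # persist before moving on
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```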
**Describe the bug** Hi, I am trying to set up evaluation on my custom dataset using Ragas, but I get the following error when trying to evaluate the context_recall metric: ```bash evaluating...
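When `context_recall` fails on a custom dataset, the column schema is the usual suspect; a minimal sketch of the shape the metric expects, assuming v0.1.x (older releases instead wanted a `ground_truths` list column rather than a `ground_truth` string). The row contents are toy data.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_recall

# context_recall compares the retrieved contexts against a reference answer,
# so a ground_truth column is required alongside question/answer/contexts.
ds = Dataset.from_dict({
    "question": ["Who maintains Ragas?"],
    "answer": ["The Exploding Gradients team."],
    "contexts": [["Ragas is maintained by Exploding Gradients."]],  # list of strings per row
    "ground_truth": ["Exploding Gradients."],
})

result = evaluate(ds, metrics=[context_recall])
print(result)
```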
**Describe the Feature** A class to generate synthetic questions or a dataset from an existing document store or Index. **Why is the feature important for you?** This will reduce LLM calls and...
**Describe the bug** LangchainLLMWrapper has a .generate_text() function, which in turn calls .generate_prompt() on the underlying LLM. LangchainLLMWrapper passes a 'temperature' parameter to .generate_prompt(), which causes the following issues: 1. temperature...
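For context, the wrapper in question is typically used like this; a minimal sketch, assuming v0.1.x, where the model name and dataset are placeholders. The forwarded temperature kwarg travels from .generate_text() into the underlying model's .generate_prompt().

```python
from datasets import Dataset
from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import faithfulness

# Wrap a Langchain chat model for use as the Ragas judge. Per the report
# above, generate_text() forwards a temperature argument down to the model's
# generate_prompt() call, which some backends reject.
ragas_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-3.5-turbo"))

ds = Dataset.from_dict({
    "question": ["What is 2 + 2?"],
    "answer": ["4"],
    "contexts": [["Basic arithmetic: 2 + 2 = 4."]],
})

result = evaluate(ds, metrics=[faithfulness], llm=ragas_llm)
print(result)
```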