ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** Unable to instantiate `HuggingfaceEmbeddings`, because it doesn't implement the asynchronous embed methods of the base...
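A minimal workaround sketch for the missing async methods, assuming the base class expects `aembed_query`/`aembed_documents`: delegate them to the existing synchronous implementations. The subclass name is illustrative, not the library's own fix.

```python
# Hypothetical workaround: subclass HuggingfaceEmbeddings and delegate the
# async embed methods (assumed to be aembed_query/aembed_documents) to the
# synchronous ones by running them in a worker thread.
import asyncio
from typing import List

from ragas.embeddings import HuggingfaceEmbeddings


class AsyncHuggingfaceEmbeddings(HuggingfaceEmbeddings):
    async def aembed_query(self, text: str) -> List[float]:
        # Run the blocking sync call off the event loop.
        return await asyncio.to_thread(self.embed_query, text)

    async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
        return await asyncio.to_thread(self.embed_documents, texts)
```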
The `CostCallbackHandler.total_cost` method can take a dictionary of models and their associated costs per input/output token. However, this was not functional because the token usage parser did not save the model's...
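For context, a sketch of the cost-tracking flow this refers to, using the scalar per-token form; the per-model dictionary form is the part reported as non-functional. The dataset, metrics, and evaluator LLM are assumed to already exist, and the prices shown are placeholders.

```python
# Cost tracking sketch: record token usage during evaluation, then convert it
# into a dollar figure. dataset/metrics are assumed to be defined elsewhere.
from ragas import evaluate
from ragas.cost import get_token_usage_for_openai

result = evaluate(
    dataset,
    metrics=metrics,
    token_usage_parser=get_token_usage_for_openai,  # records per-call token usage
)

# Placeholder per-token prices; the per-model dict variant is what the issue covers.
print(result.total_cost(cost_per_input_token=5e-6, cost_per_output_token=15e-6))
```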
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** When the query distribution is set to `MultiHopAbstractQuerySynthesizer`, test generation fails. Ragas version: 0.2.6 Python...
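A reproduction-style sketch of that setup; the import path and the `query_distribution` format follow the 0.2.x testset API as I understand it, and `generator_llm`, `generator_embeddings`, and `docs` are assumed to exist.

```python
# Sketch: force every generated sample through the multi-hop abstract synthesizer.
from ragas.testset import TestsetGenerator
from ragas.testset.synthesizers.multi_hop import MultiHopAbstractQuerySynthesizer

generator = TestsetGenerator(llm=generator_llm, embedding_model=generator_embeddings)

# (synthesizer, probability) pairs; 1.0 routes all queries to this synthesizer.
query_distribution = [(MultiHopAbstractQuerySynthesizer(llm=generator_llm), 1.0)]

testset = generator.generate_with_langchain_docs(
    docs,
    testset_size=10,
    query_distribution=query_distribution,
)
```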
# Problem default_transform() uses token lengths up to 100k (the 0–100k interval) and separates it into three bins. But for longer documents with a token length >100k (or a length of 0) this function raises...
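A sketch of the call path that hits this binning; the function lives in `ragas.testset.transforms` (named `default_transforms` there) as far as I can tell, and `llm`, `embeddings`, and `docs` are assumed to already exist.

```python
# Sketch: default_transforms picks its pipeline from the documents' token
# lengths (binned over the 0-100k range), which is where very long documents
# reportedly fail.
from ragas.testset.graph import KnowledgeGraph
from ragas.testset.transforms import apply_transforms, default_transforms

kg = KnowledgeGraph()  # graph that the transforms will populate

transforms = default_transforms(documents=docs, llm=llm, embedding_model=embeddings)
apply_transforms(kg, transforms)
```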
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question. **Your Question** I assume I should use LangchainLLMWrapper to add the DeepSeek API,...
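One way this is commonly done, assuming DeepSeek's OpenAI-compatible endpoint: wrap a langchain `ChatOpenAI` client that points at DeepSeek's base URL. The model name and URL come from DeepSeek's documentation, not from ragas, so treat them as assumptions.

```python
# Sketch: route ragas through DeepSeek via its OpenAI-compatible API.
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

deepseek_llm = LangchainLLMWrapper(
    ChatOpenAI(
        model="deepseek-chat",                 # assumed DeepSeek model name
        api_key="sk-...",                      # DeepSeek API key
        base_url="https://api.deepseek.com",   # assumed OpenAI-compatible endpoint
    )
)
```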
- [x] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question. **My Question** I'm getting an exception ``` File "/my/path/to/ragas/prompt/pydantic_prompt.py", line 423, in parse_output_string raise...
LangchainLLMWrapper sets a default temperature even with models that don't support it (e.g. o3-mini)
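A minimal setup that surfaces this report: the caller sets no temperature, yet the wrapper reportedly injects one when generating, which models like o3-mini reject. The snippet is a sketch of that instantiation only.

```python
# Sketch: wrap o3-mini without specifying a temperature; the issue is that the
# wrapper supplies a default temperature on its own during generation.
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="o3-mini"))
```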
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. **Describe the bug** When I try to create an LLM instance using LangchainLLMWrapper and specify a...
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question. **Your Question** Ragas version: 0.28.0 I use the script in the documentation to...
## Issue Link / Problem Description - Fixes https://github.com/vibrantlabsai/ragas/issues/2457 (experiment concurrency throttling) Users running `@experiment().arun()` couldn’t limit concurrent async tasks to honor provider rate limits (e.g., Azure OpenAI). Unlike `evaluate()`,...
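For contrast, the throttling that `evaluate()` already exposes and that the PR brings to experiments: a `RunConfig` caps concurrent workers. This is a sketch; the dataset and metrics are assumed to exist, and the numbers are placeholders.

```python
# Sketch: limit concurrency in evaluate() to respect provider rate limits.
from ragas import evaluate
from ragas.run_config import RunConfig

result = evaluate(
    dataset,
    metrics=metrics,
    run_config=RunConfig(max_workers=4, timeout=180),  # cap concurrent calls
)
```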
[ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug. I want to know why the output token limit is being exceeded while evaluation is in...
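If the evaluator model's completion limit is the bottleneck, one knob is the wrapped langchain model's `max_tokens`. A sketch under that assumption; the model choice and limit are illustrative, not a confirmed fix for this report.

```python
# Sketch: raise the completion limit on the wrapped evaluator model.
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini", max_tokens=4096))
```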