Unable to import evaluate from ragas.llama_index
[X] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
When trying to reproduce the code from the Ragas documentation, the following line of code throws an import error:
from ragas.llama_index import evaluate
The module does not seem to be implemented. Is the documentation up to date?
Ragas version: 0.1.3
Python version: 3.10
Code to Reproduce
Just try to reproduce the code from the Ragas integration with LlamaIndex docs.
Error trace
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[19], line 1
----> 1 from ragas.llama_index import evaluate

ModuleNotFoundError: No module named 'ragas.llama_index'
Same error with Python 3.10.6 too.
@vecorro I read this in a Medium article: "It is worth mentioning that if you install the latest version (v0.1.0rc1) using pip install git+https://github.com/explodinggradients/ragas.git, there is no support for LlamaIndex."
What worked for me was downgrading: pip uninstall ragas and llama-index, then pip install ragas==0.0.22 and pip install llama-index==0.6.9.
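If the downgrade took effect, the old import path should be available again. A minimal check, assuming the pins above (ragas==0.0.22, llama-index==0.6.9):

# Quick check that the legacy module exists after pinning ragas==0.0.22:
# if the downgrade worked, this import no longer raises ModuleNotFoundError.
from ragas.llama_index import evaluate

print(evaluate)  # should print the function object instead of raising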
Here is my quick fix, not sure where I got it:
from ragas.metrics.base import MetricWithLLM
from langchain_openai import OpenAIEmbeddings
from ragas.llms import llm_factory
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
)
from ragas.metrics.critique import harmfulness


def init_all_metrics() -> list[MetricWithLLM]:
    metrics = [
        faithfulness,
        answer_relevancy,
        context_precision,
        context_recall,
        harmfulness,
    ]
    ragas_llm = llm_factory()  # default OpenAI LLM wrapped for Ragas
    embed_model = OpenAIEmbeddings()
    # NOTE: error from RAGAS, need this fix -- attach the LLM and embeddings to every metric
    for m in metrics:
        m.llm = ragas_llm
        m.embeddings = embed_model
    return metrics
Need to import: from ragas import evaluate
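For anyone wiring this up end to end, here is a rough usage sketch of the quick fix above. The dataset name eval_dataset and its rows are placeholders, and it assumes a datasets.Dataset with the columns Ragas 0.1.x expects (question, answer, contexts, ground_truth):

from datasets import Dataset
from ragas import evaluate

# Placeholder evaluation data -- replace with your own RAG pipeline outputs.
eval_dataset = Dataset.from_dict({
    "question": ["What does Ragas evaluate?"],
    "answer": ["Ragas evaluates RAG pipelines."],
    "contexts": [["Ragas is a framework for evaluating RAG pipelines."]],
    "ground_truth": ["Ragas evaluates retrieval-augmented generation pipelines."],
})

metrics = init_all_metrics()  # from the quick fix above
result = evaluate(eval_dataset, metrics=metrics)
print(result)  # per-metric scores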
Thanks @dxv2k, I appreciate your help!
same issue
hey folks, do check out the updated docs: LlamaIndex | Ragas
but in a gist, this is how it has changed:
from llama_index.embeddings.openai import OpenAIEmbedding
from ragas.integrations.llama_index import evaluate

result = evaluate(
    query_engine=query_engine,
    metrics=metrics,
    dataset=ds_dict,
    llm=evaluator_llm,
    embeddings=OpenAIEmbedding(),
)
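And for completeness, a rough sketch of how the inputs used above (query_engine, metrics, ds_dict, evaluator_llm) might be assembled. The data directory, model name, example question, and the ds_dict keys are placeholders/assumptions, not taken verbatim from the Ragas docs:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

# Build a query engine over your own documents (placeholder path).
documents = SimpleDirectoryReader("./data").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

# Judge LLM and the metrics to compute (model name is a placeholder).
evaluator_llm = OpenAI(model="gpt-3.5-turbo")
metrics = [faithfulness, answer_relevancy, context_precision, context_recall]

# Evaluation questions and reference answers as a plain dict of lists
# (assumed keys: "question" and "ground_truth").
ds_dict = {
    "question": ["What is Ragas used for?"],
    "ground_truth": ["Ragas is used to evaluate RAG pipelines."],
}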