Faithfulness Errors
- [x] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question

I got the following error:

```
ERROR:ragas.executor:Exception raised in Job[9]: AttributeError('NoneType' object has no attribute 'generate')
```
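This AttributeError pattern arises whenever a call like `self.llm.generate(...)` runs while `self.llm` is still `None`. A minimal, ragas-free reproduction (the `Metric` class here is illustrative, not ragas code):

```python
class Metric:
    """Toy stand-in for a metric whose LLM was never assigned."""

    def __init__(self):
        self.llm = None  # the LLM attribute was never set to a real model

    def score(self):
        # Calling .generate() on None raises the same AttributeError
        return self.llm.generate("prompt")


try:
    Metric().score()
except AttributeError as e:
    print(e)  # prints: 'NoneType' object has no attribute 'generate'
```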
Code Examples

```python
from ragas.experimental.metrics._faithfulness import FaithfulnessExperimental
from ragas.metrics.base import get_segmenter
from ragas.llms import llm_factory
from datasets import Dataset
from ragas import evaluate

LLM_NAME = "gpt-4o-mini"
ragas_llm = llm_factory(model=LLM_NAME)

# Prepare the dataset (questions, answers, contexts, ground_truths are defined elsewhere)
dataset = Dataset.from_dict(
    {
        "question": questions,
        "answer": answers,
        "contexts": contexts,
        "ground_truth": ground_truths,
    }
)

# Specify the language for Faithfulness
faithfulness = FaithfulnessExperimental()
faithfulness.sentence_segmenter = get_segmenter(language="japanese", clean=False)
faithfulness.llm = ragas_llm

# Use the adapted faithfulness metric in your evaluation
score = evaluate(
    dataset,
    llm=ragas_llm,
    metrics=[faithfulness],
)
print(score)
```
The error you're encountering, `'NoneType' object has no attribute 'generate'`, suggests that the `llm` object might not be properly initialized or set. Here are a few steps to troubleshoot and resolve the issue:

1. **Check LLM Initialization**: Ensure that `ragas_llm` is correctly initialized by the `llm_factory` function. Verify that `llm_factory` returns a valid LLM instance and not `None`.
2. **Verify LLM Assignment**: Make sure that `faithfulness.llm = ragas_llm` correctly assigns the LLM to the `FaithfulnessExperimental` instance. The LLM should not be `None` at this point.
3. **Inspect `llm_factory`**: If the `helicone` configuration is enabled, ensure that all necessary parameters such as `default_headers` and `base_url` are correctly set and not `None` [1].
4. **Debugging**: Add print statements or use a debugger to check the state of `ragas_llm` after initialization to confirm it is not `None`.
By following these steps, you should be able to identify and fix the source of the error.
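As a concrete version of the debugging step above, a small guard can be placed before `evaluate` to fail fast with a clearer message than the late `NoneType` AttributeError. The helper below is a hypothetical sketch, not part of ragas:

```python
def assert_llm_ready(llm, name="llm"):
    """Raise a descriptive error if the LLM is missing or lacks generate()."""
    if llm is None:
        raise ValueError(
            f"{name} is None; check that llm_factory() returned an instance"
        )
    if not hasattr(llm, "generate"):
        raise TypeError(
            f"{name} ({type(llm).__name__}) has no generate() method"
        )
    return llm


# Usage before evaluation (ragas objects assumed to exist):
# assert_llm_ready(ragas_llm, "ragas_llm")
# assert_llm_ready(faithfulness.llm, "faithfulness.llm")
```

Running these checks right after initialization pinpoints whether the problem is in `llm_factory` itself or in the later assignment to the metric.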
To continue talking to Dosu, mention @dosu.
@dosu @dosubot I have almost identical code with the same model (gpt-4o-mini) and I encounter the same error. When I inspected what `llm_factory(model=MODEL_NAME)` returns, I found `LangchainLLMWrapper(run_config=RunConfig(timeout=180, max_retries=10, max_wait=60, max_workers=16, exception_types=<class 'openai.RateLimitError'>, log_tenacity=False, seed=42))`.
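Notably, that repr shows only the `run_config`, which is consistent with the wrapped LangChain model itself being missing or `None`. A stdlib-only way to dump an object's public, non-callable attributes and spot a `None` field (attribute names on the real wrapper may differ; this is a generic inspection sketch):

```python
def dump_attrs(obj):
    """Return a dict of an object's public, non-callable attributes."""
    attrs = {}
    for name in dir(obj):
        if name.startswith("_"):
            continue  # skip private/dunder attributes
        try:
            value = getattr(obj, name)
        except Exception:
            continue  # some attributes may raise on access
        if not callable(value):
            attrs[name] = value
    return attrs


# Example: dump_attrs(ragas_llm) would reveal whether any wrapped
# field (e.g. the underlying LangChain model) is None.
```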