
Faithfulness Errors

Open TakutoIyanagi-littletree opened this issue 1 year ago • 2 comments

  • [x] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question

I got the following error:

ERROR:ragas.executor:Exception raised in Job[9]: AttributeError('NoneType' object has no attribute 'generate')

Code Examples

import openai
from ragas.experimental.metrics._faithfulness import FaithfulnessExperimental
from ragas.metrics.base import get_segmenter
from ragas.llms import llm_factory
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
)

LLM_NAME = "gpt-4o-mini"
ragas_llm = llm_factory(model=LLM_NAME)

# Prepare the dataset (questions, answers, contexts, and ground_truths are assumed to be lists defined elsewhere)
dataset = Dataset.from_dict(
    {
        "question": questions,
        "answer": answers,
        "contexts": contexts,
        "ground_truth": ground_truths,
    }
)

# Configure the experimental Faithfulness metric for Japanese (this shadows the faithfulness imported above)
faithfulness = FaithfulnessExperimental()
faithfulness.sentence_segmenter = get_segmenter(language="japanese", clean=False)
faithfulness.llm = ragas_llm

# Use the adapted faithfulness metric in your evaluation
score = evaluate(
    dataset,
    llm=ragas_llm,
    metrics=[faithfulness],
)
print(score)

The error you're encountering, 'NoneType' object has no attribute 'generate', suggests that the llm object might not be properly initialized or set. Here are a few steps to troubleshoot and resolve the issue:

  1. Check LLM Initialization: Ensure that ragas_llm is correctly initialized by the llm_factory function. Verify that the llm_factory function is returning a valid LLM instance and not None.

  2. Verify LLM Assignment: Make sure that faithfulness.llm = ragas_llm is correctly assigning the LLM to the FaithfulnessExperimental instance. The LLM should not be None at this point.

  3. Inspect llm_factory: If Helicone configuration is enabled, ensure that all necessary parameters like default_headers and base_url are correctly set and not None [1].

  4. Debugging: Add print statements or use a debugger to check the state of ragas_llm after initialization to confirm it is not None (a minimal check is sketched below).

By following these steps, you should be able to identify and fix the source of the error.
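
As a rough illustration of steps 1, 2, and 4, here is a minimal sanity check. It is only a sketch: it reuses the imports and LLM_NAME from the snippet above and assumes the same environment (ragas installed, OpenAI API key configured); it simply verifies that the LLM wrapper exists before evaluate() is called.

from ragas.experimental.metrics._faithfulness import FaithfulnessExperimental
from ragas.llms import llm_factory

LLM_NAME = "gpt-4o-mini"

# Step 1: confirm llm_factory returns a wrapper object, not None
ragas_llm = llm_factory(model=LLM_NAME)
print(type(ragas_llm))
assert ragas_llm is not None, "llm_factory returned None"

# Step 2: confirm the assignment actually lands on the metric instance
faithfulness = FaithfulnessExperimental()
faithfulness.llm = ragas_llm
assert faithfulness.llm is not None, "faithfulness.llm was not set"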

To continue talking to Dosu, mention @dosu.

dosubot[bot] • Oct 08 '24 05:10

@dosu @dosubot I have almost identical code with the same model (gpt-4o-mini) and I encounter the same error. When I checked what llm_factory(model=MODEL_NAME) returns, I found: LangchainLLMWrapper(run_config=RunConfig(timeout=180, max_retries=10, max_wait=60, max_workers=16, exception_types=<class 'openai.RateLimitError'>, log_tenacity=False, seed=42))
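
For reference, a minimal sketch of that check (MODEL_NAME here stands in for the same "gpt-4o-mini"; the commented output below is simply the repr reported above, not a guaranteed format):

from ragas.llms import llm_factory

MODEL_NAME = "gpt-4o-mini"  # assumed to match the model used above

# Printing the factory result shows the wrapper and its RunConfig
print(llm_factory(model=MODEL_NAME))
# LangchainLLMWrapper(run_config=RunConfig(timeout=180, max_retries=10, max_wait=60,
#     max_workers=16, exception_types=<class 'openai.RateLimitError'>, log_tenacity=False, seed=42))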

saidamahmoud • Oct 09 '24 09:10