
Evaluating Langchain QA Chains without an OpenAI key

Bamore01 opened this issue 1 year ago

[ ] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question

I want to evaluate a QA system built with Langchain, but without using an OpenAI key. My model is Llama 2 and my embedding model is loaded from Hugging Face. Is it possible to evaluate it like this?

from ragas.langchain.evalchain import RagasEvaluatorChain
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
)

# Create the evaluation chains
faithfulness_chain = RagasEvaluatorChain(metric=faithfulness)
answer_rel_chain = RagasEvaluatorChain(metric=answer_relevancy)
context_rel_chain = RagasEvaluatorChain(metric=context_precision)
context_recall_chain = RagasEvaluatorChain(metric=context_recall)

Code Examples

model = AutoModelForCausalLM.from_pretrained(model_path_or_repo_id="TheBloke/Llama-2-7B-Chat-GGML")
embed_model_id = "distiluse-base-multilingual-cased-v1"

from langchain.embeddings import HuggingFaceEmbeddings

embed_model = HuggingFaceEmbeddings(
    model_name=embed_model_id,
    model_kwargs={"device": device},
    encode_kwargs={"device": device, "batch_size": 32},
)
embeddings = embed_model.embed_documents(docs)

import torch
import transformers
from transformers import pipeline
from langchain import HuggingFacePipeline

generate_text = pipeline(
    model=model,
    tokenizer=tokenizer,
    return_full_text=True,
    task='text-generation',
    temperature=0.4,
    max_new_tokens=512,
    repetition_penalty=1.1,
)

llm = HuggingFacePipeline(pipeline=generate_text)

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(input_variables=["context", "question"], template=template)

from langchain_core.prompts import BasePromptTemplate
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
    chain_type_kwargs={'prompt': prompt},
)

Can this be evaluated without the OpenAI key, using my model and my embeddings?

Additional context

I have already read "https://docs.ragas.io/en/v0.1.1/howtos/integrations/langchain.html" and "https://docs.ragas.io/en/stable/howtos/customisations/bring-your-own-llm-or-embs.html", and neither was helpful to me.

Bamore01 · Mar 26 '24

I also have the same issue.

In ragas 0.1.6, the kwargs.get calls inside ragas.integrations.langchain.EvaluatorChain fall back to initializing OpenAI clients, which requires an OpenAI API key to be set as OPENAI_API_KEY even when you supply your own models.

Code:

class EvaluatorChain(Chain, RunEvaluator):

    metric: Metric

    def __init__(self, metric: Metric, **kwargs: t.Any):
        kwargs["metric"] = metric
        super().__init__(**kwargs)
        if "run_config" in kwargs:
            run_config = kwargs["run_config"]
        else:
            run_config = RunConfig()
        if isinstance(self.metric, MetricWithLLM):
            llm = kwargs.get("llm", ChatOpenAI())  # <-- look here: ChatOpenAI() is always initialized
            t.cast(MetricWithLLM, self.metric).llm = LangchainLLMWrapper(llm)
        if isinstance(self.metric, MetricWithEmbeddings):
            embeddings = kwargs.get("embeddings", OpenAIEmbeddings())  # <-- likewise for OpenAIEmbeddings()
            t.cast(
                MetricWithEmbeddings, self.metric
            ).embeddings = LangchainEmbeddingsWrapper(embeddings)
        self.metric.init(run_config)
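
The underlying pitfall is plain Python behavior, not anything ragas-specific: arguments are evaluated before a call, so dict.get constructs its default value even when the key is present. A self-contained illustration, where noisy_default is a hypothetical stand-in for ChatOpenAI() (which validates OPENAI_API_KEY on construction):

# dict.get evaluates its default argument eagerly, before checking the key.
def noisy_default():
    print("default constructed")  # stands in for ChatOpenAI() validating the API key
    return "fallback-llm"

kwargs = {"llm": "my-huggingface-llm"}

# Even though "llm" is present, the default is still built:
llm = kwargs.get("llm", noisy_default())  # prints "default constructed"
print(llm)  # -> my-huggingface-llm

# A lazy fallback only builds the default when the key is missing:
llm = kwargs.get("llm") or noisy_default()  # prints nothing
print(llm)  # -> my-huggingface-llm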

ylltw01 · Apr 02 '24

This sounds like a bug to me, since it always calls ChatOpenAI() even if the provided LLM is different from GPT.

Same concern with the other kwargs.get call, where OpenAIEmbeddings() always gets called.
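
A fix along these lines would make both fallbacks lazy, so that supplying your own models never touches OpenAI. The following rewrite of the two branches from the __init__ quoted above is only a sketch of the idea, not an actual ragas patch:

# Sketch only: construct the OpenAI defaults lazily (not the actual ragas fix).
if isinstance(self.metric, MetricWithLLM):
    llm = kwargs.get("llm") or ChatOpenAI()  # default built only when no llm is supplied
    t.cast(MetricWithLLM, self.metric).llm = LangchainLLMWrapper(llm)
if isinstance(self.metric, MetricWithEmbeddings):
    embeddings = kwargs.get("embeddings") or OpenAIEmbeddings()  # likewise for embeddings
    t.cast(
        MetricWithEmbeddings, self.metric
    ).embeddings = LangchainEmbeddingsWrapper(embeddings)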

mrgiba · Apr 15 '24

Hi @mrgiba @ylltw01 @Bamore01, is there any workaround to pass Hugging Face models and embeddings when evaluating LangChain QA chains? (One possibility is sketched below.)

akashlp27 · Jun 25 '24
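
One untested workaround sketch, going by the 0.1.6 __init__ quoted above: pass your models through the llm and embeddings kwargs, which the constructor does check before falling back to OpenAI. Because the OpenAI defaults are still constructed eagerly (and may raise without a key), a placeholder OPENAI_API_KEY is set first; it is never actually used. Here llm and embed_model are assumed to be the HuggingFacePipeline and HuggingFaceEmbeddings from the question, and the import path follows ylltw01's snippet (ragas.integrations.langchain) rather than the older ragas.langchain.evalchain:

import os

# Placeholder key: only needed so the eagerly constructed ChatOpenAI() /
# OpenAIEmbeddings() defaults do not raise; they are discarded because
# "llm" and "embeddings" are found in kwargs.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

from ragas.integrations.langchain import EvaluatorChain
from ragas.metrics import faithfulness, answer_relevancy

faithfulness_chain = EvaluatorChain(metric=faithfulness, llm=llm, embeddings=embed_model)
answer_rel_chain = EvaluatorChain(metric=answer_relevancy, llm=llm, embeddings=embed_model)

If the chain's pydantic validation rejects the extra kwargs, an alternative is to construct the chain with defaults and then overwrite the wrapped models on the metric objects (e.g. faithfulness.llm = LangchainLLMWrapper(llm)), since the chain evaluates through the metric it holds.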