
Cannot use custom OpenAI Endpoint

Open peanutshawny opened this issue 1 year ago • 6 comments

For context, I am installing ragas from source.

I'm using a custom OpenAI endpoint (not the default https://api.openai.com/v1) and can't seem to run evaluations.

When I try executing evaluate on my data, I get back a 401 saying my request is unauthorized. It turns out your wrapper around langchain's ChatOpenAI doesn't allow a specific endpoint to be passed in, even though langchain supports it under the hood.

I've tried working around this by following your instructions here on how to customize ragas with my own LLM, defining my model as a ChatOpenAI object with my custom OpenAI endpoint. This didn't seem to work. Any idea how to make it work for custom OpenAI endpoints? I feel like you could just let the user pass their own OpenAI endpoint when initializing llm_factory here.
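To make the request concrete, here is a hypothetical sketch of what a base-URL-aware factory could look like. This is not ragas's actual API: the `EndpointConfig` class, the `OPENAI_BASE_URL` environment-variable fallback, and the parameter names are all illustrative.

```python
import os
from dataclasses import dataclass
from typing import Optional

DEFAULT_BASE_URL = "https://api.openai.com/v1"

@dataclass(frozen=True)
class EndpointConfig:
    """Illustrative container for the settings such a factory could accept."""
    model: str
    base_url: str

def llm_factory(model: str = "gpt-3.5-turbo", base_url: Optional[str] = None) -> EndpointConfig:
    # Precedence: explicit argument, then an environment variable, then the public default.
    resolved = base_url or os.environ.get("OPENAI_BASE_URL") or DEFAULT_BASE_URL
    return EndpointConfig(model=model, base_url=resolved)
```

With a signature like this, a caller behind a corporate proxy would write `llm_factory(base_url="https://proxy.internal/v1")` and everything built from the config would hit the proxy instead of the public endpoint.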

peanutshawny avatar Jan 21 '24 20:01 peanutshawny

could you specify which version of ragas you're using? if it's from main, that should not be the case

just create the Langchain OpenAI endpoint as you would normally and pass it to the llm argument of the evaluate() function and it should work. Let me know if it works?

jjmachan avatar Jan 22 '24 01:01 jjmachan

after looking into it a bit more, the test successfully completes when I only evaluate faithfulness, but fails when I also try to evaluate answer relevance. I believe this is because answer relevance tries to use the default openai embeddings endpoint (api.openai.com/v1/embeddings) and not the one I specified. I am installing ragas from main.

here's my code:

import os

os.environ["OPENAI_API_KEY"] = "my-openai-key"
os.environ["OPENAI_BASE"] = "my-base-url"

from datasets import Dataset
from langchain.chat_models import ChatOpenAI
from ragas.llms.base import LangchainLLMWrapper

gpt4 = ChatOpenAI(model_name="gpt-4", api_key=os.environ["OPENAI_API_KEY"], base_url=os.environ["OPENAI_BASE"])
gpt4_wrapper = LangchainLLMWrapper(langchain_llm=gpt4)

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
)
from ragas.evaluation import evaluate

eval_qa_dataset = Dataset.from_pandas(eval_qa_df)

metrics = [
    answer_relevancy,
    faithfulness,
]

for metric in metrics:
    metric.llm = gpt4_wrapper

result = evaluate(
    llm=gpt4,
    dataset=eval_qa_dataset,
    metrics=metrics,
)

do I need to define the embeddings model separately? if so, how do I do that?

peanutshawny avatar Jan 22 '24 03:01 peanutshawny

nevermind, I figured out how to define the embeddings model and pass it into the evaluation:

import os

os.environ["OPENAI_API_KEY"] = "my-openai-key"
os.environ["OPENAI_BASE"] = "my-base-url"

from datasets import Dataset
from langchain.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
from ragas.llms.base import LangchainLLMWrapper

gpt4 = ChatOpenAI(model_name="gpt-4", api_key=os.environ["OPENAI_API_KEY"], base_url=os.environ["OPENAI_BASE"])
ada_002 = OpenAIEmbeddings(model="text-embedding-ada-002", api_key=os.environ["OPENAI_API_KEY"], base_url=os.environ["OPENAI_BASE"])
gpt4_wrapper = LangchainLLMWrapper(langchain_llm=gpt4)

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
)
from ragas.evaluation import evaluate

eval_qa_dataset = Dataset.from_pandas(eval_qa_df)

metrics = [
    answer_relevancy,
    faithfulness,
]

for metric in metrics:
    metric.llm = gpt4_wrapper
    metric.embeddings = ada_002

result = evaluate(
    llm=gpt4_wrapper,
    embeddings=ada_002,
    dataset=eval_qa_dataset,
    metrics=metrics,
)

in the future, it would be cool if custom endpoints/custom models could be defined more seamlessly. many companies have their own proxies or custom endpoints that they set up in front of openai.
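The failure mode described earlier in this thread is essentially a path-joining problem: one client was configured against the custom base URL while the embeddings client silently fell back to the public default. A minimal stdlib sketch of how API paths join onto a configurable base (the `proxy.mycompany.com` host is made up for illustration):

```python
from urllib.parse import urljoin

def endpoint(base_url: str, path: str) -> str:
    # Join an API path onto whatever base the deployment uses.
    # A trailing slash on the base matters to urljoin, so normalize both sides.
    return urljoin(base_url.rstrip("/") + "/", path.lstrip("/"))

# Same path, two different bases: if only one client picks up the proxy base,
# the other one hits api.openai.com and gets a 401 from the wrong credentials.
print(endpoint("https://proxy.mycompany.com/openai/v1", "embeddings"))
# https://proxy.mycompany.com/openai/v1/embeddings
print(endpoint("https://api.openai.com/v1", "embeddings"))
# https://api.openai.com/v1/embeddings
```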

peanutshawny avatar Jan 22 '24 04:01 peanutshawny

I am so confused by these answers, I cannot replicate them... from ragas.llms.base import LangchainLLMWrapper doesn't even exist. I am using ragas 0.0.22

cosmin-novac avatar Jan 23 '24 09:01 cosmin-novac

I am so confused by these answers, I cannot replicate them... from ragas.llms.base import LangchainLLMWrapper doesn't even exist. I am using ragas 0.0.22

you have to install ragas from the main branch. this functionality is only available in main at the moment. you can verify this yourself by comparing the contents of ragas.llms.base in the main branch against v0.0.22.

git clone https://github.com/explodinggradients/ragas.git 
cd ragas 
pip install -e .
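After installing, you can check which code you actually have with a small probe. This sketch uses only the standard library; it assumes nothing about ragas beyond the module path discussed above.

```python
import importlib
import importlib.util

def has_langchain_wrapper() -> bool:
    """True only when the installed ragas exposes ragas.llms.base.LangchainLLMWrapper."""
    if importlib.util.find_spec("ragas") is None:
        return False  # ragas is not installed at all
    try:
        module = importlib.import_module("ragas.llms.base")
    except ImportError:
        return False  # installed version predates the module (e.g. a 0.0.22 release)
    return hasattr(module, "LangchainLLMWrapper")

print(has_langchain_wrapper())
```

If this prints False, you are still on a release that lacks the wrapper and need the editable install from main shown above.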

peanutshawny avatar Jan 23 '24 18:01 peanutshawny

hey @peanutshawny I'm glad you unblocked yourself, and thanks for sharing your learnings here 🙂

I was however curious about what you said here

in the future, it would be cool if we are able to define custom endpoints/custom models more seamlessly. many companies may have their own proxies or custom endpoints that they set up with openai.

could you explain it a bit more or show some code? are you using any libs/SDKs to interact with this? is it OpenAI API compatible? which routing service are you using?

jjmachan avatar Feb 05 '24 21:02 jjmachan