OpenAI error with ragas version 0.1.0
I am trying to compute the following metrics with ragas 0.1.0, openai 1.12.0, and llama-index 0.8.51.post1: answer_relevancy and answer_correctness, but I am getting this error:

openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
My code is something like this:

```python
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
    answer_correctness,
    context_relevancy,
)
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings

azure_model = AzureChatOpenAI(
    deployment_name="gpt-model",
    model="gpt-35-turbo",
    openai_api_base="xxxxx",
    openai_api_type="azure",
    openai_api_key="xxxxxxx",
    openai_api_version="xxxxx",
)

azure_embeddings = OpenAIEmbeddings(
    deployment="text-embedding-ada-002",
    model="text-embedding-ada-002",
    openai_api_base="xxxxx",
    openai_api_type="azure",
    openai_api_key="xxxxx",
    openai_api_version="xxxxx",
)

final_df = df_eval[["question", "ground_truths", "answer", "contexts"]]

# Huggingface dataset
dataset = Dataset.from_pandas(final_df)

metrics = [
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
    context_relevancy,
    # answer_correctness,
]

result = evaluate(dataset, metrics=metrics, embeddings=azure_embeddings, llm=azure_model)
```
The above is sample code, so I may have missed something here, but it works for context_precision, faithfulness, and context_recall.

Please let me know which versions of ragas and OpenAI I should use.
I am getting the same error with similar code; my keys work fine with the normal convIR method.

openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
Hey @jeny1292 @KINJALMARU16, can you both install ragas from source and try again to see if it works?
Hello, are you asking me to download the latest version? Do you mean the git install?
@jeny1292 pip install git+https://github.com/explodinggradients/ragas.git
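For reference, a quick way to confirm that the source install actually took effect (a generic check, assuming a restarted interpreter) is:

```python
# Print the installed ragas version after reinstalling from source.
import ragas

print(ragas.__version__)
```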
I am still getting the same error even after installing from source.
Hey @KINJALMARU16 @jeny1292, can you both ensure that you're following the guide here and reinstall ragas from source again (if your previous install was a week or so ago)? Asking because it's working for me.
If you have langchain 0.1 installed, try this:

```python
import os

# Load your environment variables
azure_api_key = os.environ.get("AZURE_OPENAI_API_KEY")
api_type = os.environ.get("OPENAI_API_TYPE")
azure_openai_api_version = os.environ.get("OPENAI_API_VERSION")
azure_api_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
azure_gpt3_model = os.environ.get("GPT3_MODEL_NAME")
azure_gpt4_model = os.environ.get("GPT4_MODEL_NAME")
azure_embed_model = os.environ.get("EMBED_MODEL_NAME")

os.environ["AZURE_OPENAI_API_KEY"] = azure_api_key
os.environ["AZURE_OPENAI_ENDPOINT"] = azure_api_endpoint
os.environ["OPENAI_API_VERSION"] = azure_openai_api_version

from langchain_openai.embeddings import AzureOpenAIEmbeddings
from langchain_openai.chat_models import AzureChatOpenAI
from datasets import Dataset

from ragas import evaluate
from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
    context_relevancy,
    # answer_correctness,
)

# If you are using Azure OpenAI Chat
azure_model = AzureChatOpenAI(
    api_key=azure_api_key,
    azure_endpoint=azure_api_endpoint,
    azure_deployment=azure_gpt3_model,  # e.g. gpt-35-turbo; make sure this is the deployment name for your model
    api_version=azure_openai_api_version,  # typically 2023-07-01-preview
)

# If you are using Azure OpenAI Embeddings
azure_embeddings = AzureOpenAIEmbeddings(
    api_key=azure_api_key,
    azure_endpoint=azure_api_endpoint,
    azure_deployment=azure_embed_model,  # e.g. text-embedding-ada-002; make sure this is the deployment name for your model
    api_version=azure_openai_api_version,  # typically 2023-07-01-preview
)
```

The latest version of ragas looks for ground_truth rather than ground_truths, and it should be a single string rather than a sequence of strings.
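If you prefer renaming the column up front instead of relying on the column_map used in the snippet below, a minimal hypothetical sketch of the conversion (assuming each row of ground_truths holds a one-element list) would be:

```python
# Hypothetical conversion: collapse the legacy list-valued ground_truths column
# into the single ground_truth string expected by newer ragas versions.
df_eval["ground_truth"] = df_eval["ground_truths"].apply(
    lambda gts: gts[0] if isinstance(gts, (list, tuple)) and gts else str(gts)
)
```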
```python
final_df = df_eval[["question", "ground_truths", "answer", "contexts"]]
dataset = Dataset.from_pandas(final_df)

metrics = [
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
    context_relevancy,
    # answer_correctness,
]

result = evaluate(
    dataset=dataset,
    column_map={
        "question": "question",
        "contexts": "contexts",
        "answer": "answer",
        "ground_truth": "ground_truths",
    },
    llm=azure_model,
    embeddings=azure_embeddings,
    metrics=metrics,
    is_async=True,
)

final_results = result.to_pandas()
final_results
```
Thank you, it is doing the calculation now and completes it, but it also shows these errors:

RuntimeError: Event loop is closed
Invalid JSON response. Expected dictionary with key 'Attributed'
Yes, it is working now, but it shows a new error after the evaluation completes:

RuntimeError: Event loop is closed
Invalid JSON response. Expected dictionary with key 'Attributed'
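As a side note, the RuntimeError: Event loop is closed often shows up when the async evaluation is run inside a notebook that already has an event loop; one common workaround (an assumption here, not something confirmed in this thread) is to patch the loop with nest_asyncio before calling evaluate:

```python
# Assumption: evaluate() is being called from a Jupyter notebook that already
# owns an event loop; nest_asyncio lets the nested async calls reuse it.
import nest_asyncio

nest_asyncio.apply()

result = evaluate(
    dataset=dataset,
    metrics=metrics,
    llm=azure_model,
    embeddings=azure_embeddings,
    is_async=True,
)
```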
Hi @shahules786,

ragas: 0.1.0, langchain: 0.1.6, python: 3.9

I am using the code below (with the OpenAI LLM and embeddings):
```python
from ragas import evaluate
from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
)
from ragas.metrics.critique import harmfulness

metrics = [
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
    harmfulness,
]

from datasets import load_dataset

amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")

result = evaluate(
    amnesty_qa["eval"],
    metrics=metrics,
    llm=llm,
    embeddings=embeddings,
    is_async=True,
)
```
but I am getting the error below:
File "/home/.conda/envs/ragas/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 784, in _agenerate_helper
await self._agenerate(
File "/home/.conda/envs/ragas/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1196, in _agenerate
await self._acall(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/.conda/envs/ragas/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1155, in _acall
return await run_in_executor(
File "/home/.conda/envs/ragas/lib/python3.9/site-packages/langchain_core/runnables/config.py", line 493, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
File "/home/.conda/envs/ragas/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
TypeError: _call() got an unexpected keyword argument 'temperature'
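The traceback goes through LangChain's base (non-chat) LLM path and fails inside _call, so one thing worth checking (purely an assumption; the class below is hypothetical) is whether the llm passed to evaluate is a custom LangChain LLM whose _call does not accept extra keyword arguments such as temperature. A minimal sketch of a signature that tolerates them:

```python
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM


class MyCustomLLM(LLM):  # hypothetical custom LLM wrapper
    @property
    def _llm_type(self) -> str:
        return "my-custom-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,  # accept temperature and other generation kwargs from callers
    ) -> str:
        # Call the underlying model here; placeholder response for the sketch.
        return "model response"
```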
I tried the evaluation with the versions below and it is working fine now:

openai: 1.12.0
ragas: 0.0.22
langchain: 0.1.0

There seems to be some problem with the new version of ragas.