【HELP!! TimeoutError: it seems I cannot connect to the API server】
ragas version 0.1.12
Describe the bug
I try to run the demo with my OpenAI key. I have tested that the key is valid and I get a correct response via:
curl https://api.gptsapi.net/v1/models -H "Authorization: Bearer sk-WBR2fc39b876ab1..."
However, when I run the demo it does not work, and the output was:
Evaluating: 0%| | 0/4 [01:20<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1558, in _request
response = await self._client.send(
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1661, in send
response = await self._send_handling_auth(
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
response = await self._send_handling_redirects(
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
response = await self._send_single_request(request)
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1763, in _send_single_request
response = await transport.handle_async_request(request)
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 373, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
raise exc from None
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
raise exc
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
stream = await self._connect(request)
File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection.py", line 122, in _connect
stream = await self._network_backend.connect_tcp(**kwargs)
File "/usr/local/lib/python3.10/site-packages/httpcore/_backends/auto.py", line 30, in connect_tcp
return await self._backend.connect_tcp(
File "/usr/local/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 116, in connect_tcp
stream: anyio.abc.ByteStream = await anyio.connect_tcp(
File "/usr/local/lib/python3.10/site-packages/anyio/_core/_sockets.py", line 214, in connect_tcp
async with create_task_group() as tg:
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 690, in __aexit__
raise cancelled_exc_while_waiting_tasks
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 670, in __aexit__
await asyncio.wait(self._tasks)
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 384, in wait
return await _wait(fs, timeout, return_when, loop)
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 491, in _wait
await waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/ragas/metrics/_answer_correctness.py", line 220, in _ascore
item_statement = await self.llm.generate(p_value, callbacks=callbacks)
File "/usr/local/lib/python3.10/site-packages/ragas/llms/base.py", line 95, in generate
return await agenerate_text_with_retry(
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 189, in async_wrapped
return await copy(fn, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 111, in __call__
do = await self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
result = await action(retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner
return call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 114, in __call__
result = await fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/ragas/llms/base.py", line 176, in agenerate_text
return await self.langchain_llm.agenerate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 724, in agenerate_prompt
return await self.agenerate(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 650, in agenerate
results = await asyncio.gather(
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/workspace/test.py", line 18, in <module>
score = evaluate(dataset,metrics=[faithfulness,answer_correctness])
File "/usr/local/lib/python3.10/site-packages/ragas/evaluation.py", line 247, in evaluate
raise e
File "/usr/local/lib/python3.10/site-packages/ragas/evaluation.py", line 227, in evaluate
results = executor.results()
File "/usr/local/lib/python3.10/site-packages/ragas/executor.py", line 107, in results
results = asyncio.run(_aresults())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/usr/local/lib/python3.10/site-packages/ragas/executor.py", line 102, in _aresults
r = await future
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one
return f.result() # May raise f.exception().
File "/usr/local/lib/python3.10/site-packages/ragas/executor.py", line 34, in sema_coro
return await coro
File "/usr/local/lib/python3.10/site-packages/ragas/executor.py", line 59, in wrapped_callable_async
raise e
File "/usr/local/lib/python3.10/site-packages/ragas/executor.py", line 53, in wrapped_callable_async
result = await callable(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/ragas/metrics/base.py", line 127, in ascore
raise e
File "/usr/local/lib/python3.10/site-packages/ragas/metrics/base.py", line 120, in ascore
score = await asyncio.wait_for(
File "/usr/local/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
Ragas version: 0.1.12
Python version: 3.10
Code to Reproduce
from datasets import Dataset
import os
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness
os.environ["OPENAI_API_KEY"] = "sk-WBR2fc39b876ab1dc7ee7..."
data_samples = {
'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],
'contexts' : [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
['The Green Bay Packers...Green Bay, Wisconsin.','The Packers compete...Football Conference']],
'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times']
}
dataset = Dataset.from_dict(data_samples)
score = evaluate(dataset,metrics=[faithfulness,answer_correctness])
print(score.to_pandas())
Additional context
Can anybody give me some useful advice? Thank you so much!!
Hey there, @haohao0725! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together!
To resolve the TimeoutError you're experiencing, you can try increasing the timeout ragas allows for each request: the last frames of your traceback show the error being raised by asyncio.wait_for in ragas/metrics/base.py, which applies the timeout from RunConfig, so giving slow requests more time may help. Here is an example of how you might increase it:
import os
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness
from ragas.run_config import RunConfig
from ragas.llms.base import llm_factory
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-WBR2fc39b876ab1dc7ee7..."
# Increase the timeout settings
run_config = RunConfig(timeout=120.0, thread_timeout=120) # Increase timeout to 120 seconds
# Create the LLM with the increased timeout settings
llm = llm_factory(run_config=run_config)
# Define your data samples
data_samples = {
'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],
'contexts' : [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
['The Green Bay Packers...Green Bay, Wisconsin.','The Packers compete...Football Conference']],
'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times']
}
# Create the dataset
dataset = Dataset.from_dict(data_samples)
# Evaluate the dataset with the LLM that has increased timeout settings
score = evaluate(dataset, metrics=[faithfulness, answer_correctness], llm=llm, run_config=run_config)
print(score.to_pandas())
If the issue persists, consider checking your network connection, ensuring that the OpenAI API service is operational, and verifying that there are no rate limits or other restrictions affecting your API usage [1].
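One more thing that may be worth checking here: the curl test above goes to https://api.gptsapi.net, while the default OpenAI client used by ragas targets api.openai.com, so a timeout can also mean the default endpoint is simply unreachable from your machine. Below is a minimal sketch (an assumption about your setup, not an official fix) of how you might route both the LLM and the embeddings used by the metrics through that same proxy endpoint via LangChain wrappers; the base_url, model names, and 120-second timeouts are placeholders to adjust.
import os
from datasets import Dataset
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas import evaluate
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import faithfulness, answer_correctness
from ragas.run_config import RunConfig

os.environ["OPENAI_API_KEY"] = "sk-..."  # your real key

# Assumption: the same OpenAI-compatible endpoint that the curl test used.
base_url = "https://api.gptsapi.net/v1"

# Wrap LangChain clients so both chat completions and embeddings go through the
# proxy, each with a generous client-side request timeout.
llm = LangchainLLMWrapper(
    ChatOpenAI(model="gpt-3.5-turbo", base_url=base_url, timeout=120, max_retries=3)
)
embeddings = LangchainEmbeddingsWrapper(
    OpenAIEmbeddings(model="text-embedding-ada-002", base_url=base_url, timeout=120)
)

# A one-row dataset just to keep this sketch self-contained.
dataset = Dataset.from_dict({
    'question': ['When was the first super bowl?'],
    'answer': ['The first superbowl was held on Jan 15, 1967'],
    'contexts': [['The First AFL–NFL World Championship Game was played on January 15, 1967.']],
    'ground_truth': ['The first superbowl was held on January 15, 1967'],
})

score = evaluate(
    dataset,
    metrics=[faithfulness, answer_correctness],
    llm=llm,
    embeddings=embeddings,
    run_config=RunConfig(timeout=120),  # ragas-side per-metric timeout
)
print(score.to_pandas())
If the endpoint really is reachable only through the proxy, this keeps every request (chat completions, embeddings, and the ragas-side timeout budget) consistent with the curl test that already works.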
@haohao0725 do check out Dosu's suggestion above. This will also be addressed in #1156.
Closing this as resolved.