
[Bug]: random `Give Feedback / Get Help` shows up in logs

Open jamesbraza opened this issue 1 year ago • 6 comments

What happened?

Randomly in logs, with litellm==1.48.2, a LiteLLM error will show up:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

However, this message (source) gives no context. I don't want to set litellm.set_verbose=True as it's way too verbose for my logs.

Can we have this message at least give some context (failure message, module + line number, etc.)?

Relevant log output

2024-09-27 11:36:58,701 - paperqa.agents.tools - INFO - Status: Paper Count=8 | Relevant Papers=1 | Current Evidence=3 | Current Cost=$0.1187
2024-09-27 11:36:59,221 - paperqa.agents.tools - INFO - gather_evidence starting for question 'Timing of blastema emergence in pak1(RNAi) planarians after amputation'.
2024-09-27 11:37:00,758 - paperqa.agents.tools - INFO - Status: Paper Count=14 | Relevant Papers=2 | Current Evidence=3 | Current Cost=$0.2063

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

2024-09-27 11:37:05,058 - paperqa.agents.tools - INFO - Status: Paper Count=8 | Relevant Papers=1 | Current Evidence=1 | Current Cost=$0.2576
2024-09-27 11:37:06,174 - paperqa.agents.tools - INFO - Generating answer for 'When do blastema become apparent in amputated pak1(RNAi) planarians?'.

Twitter / LinkedIn details

No response

jamesbraza avatar Sep 27 '24 18:09 jamesbraza

Also, when you actually set litellm.set_verbose=True, you get a deprecation warning:

2024-09-27 12:15:47,960 - LiteLLM - WARNING - `litellm.set_verbose` is deprecated. Please set `os.environ['LITELLM_LOG'] = 'DEBUG'` for debug logs.

So another related request: let's update this default message so that what it suggests isn't itself deprecated.
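
For reference, the non-deprecated route the warning itself points to would look roughly like this (a minimal sketch; it assumes LITELLM_LOG is read when litellm is imported):

import os

os.environ["LITELLM_LOG"] = "DEBUG"  # per the deprecation warning above; set before importing litellm

import litellm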

jamesbraza avatar Sep 27 '24 19:09 jamesbraza

Okay, I locally edited the source to call traceback.print_exception(type(original_exception), original_exception, original_exception.__traceback__) here; in context, the local patch looked roughly like the sketch below, and underneath it is the error it printed:
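
# Rough sketch of the local patch at that spot in litellm's exception handling (not upstream code);
# original_exception is the exception object already in scope there.
import traceback

traceback.print_exception(
    type(original_exception),
    original_exception,
    original_exception.__traceback__,
)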

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 377, in handle_async_request
    resp = await self._pool.handle_async_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
    raise exc
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
    stream = await self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 154, in _connect
    stream = await stream.start_tls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 68, in start_tls
    with map_exceptions(exc_map):
  File "/path/to/.pyenv/versions/3.12.5/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1554, in _request
    response = await self._client.send(
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1674, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1702, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1776, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 376, in handle_async_request
    with map_httpcore_exceptions():
  File "/path/to/.pyenv/versions/3.12.5/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 944, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 639, in make_openai_chat_completion_request
    raise e
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 627, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 370, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1412, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1821, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1515, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1588, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/main.py", line 430, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 995, in acompletion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: Connection error.

It seems to be a flaky error from OpenAI, an openai.APIConnectionError.

@krrishdholakia why is LiteLLM not auto-retrying this without throwing an unhandled stack trace?

jamesbraza avatar Sep 27 '24 22:09 jamesbraza

Can we have this message at least give some context (failure message, module + line number, etc.)

The message is raised by completion -> the retries are handled by the router. Might be helpful

There's nothing to suggest the error isn't being retried - do you see a failed request?

krrishdholakia avatar Sep 27 '24 23:09 krrishdholakia
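
(For context, a minimal sketch of going through the Router so transient failures like the APIConnectionError above get retried; the model name and retry count below are illustrative, not taken from this thread:)

import asyncio

from litellm import Router

# One deployment plus retries for transient failures; values are illustrative.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "gpt-4o"},
        }
    ],
    num_retries=3,
)

async def main():
    response = await router.acompletion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())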

Thanks for responding. What I see in my logs is the below showing up a lot, which makes me think something is failing.

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

I think LiteLLM should only print this ^ message when something is not being retried. If a retry is taking place, I don't want to see a random failure message showing up in my logs. Does that make sense?

jamesbraza avatar Sep 27 '24 23:09 jamesbraza

@jamesbraza it should be simple to disable this:

litellm_settings:
    suppress_debug_info: true

Does this solve your problem?

krrishdholakia avatar Sep 27 '24 23:09 krrishdholakia

So I am aware one can configure that, but in general I am trying to point out that this is a bad default behavior.

I think LiteLLM should change its default so these messages only come up when there's a critical or non-retryable error.

jamesbraza avatar Sep 27 '24 23:09 jamesbraza

@krrishdholakia I echo James' view on this. We also saw lots of these error messages in our logs, and erroneously assumed something was going wrong. Having this error accompany both successful and unsuccessful requests makes debugging time-consuming, and increases the likelihood of true errors being overlooked.

achpalaman avatar Dec 30 '24 09:12 achpalaman

Got it - we should make sure this doesn't run if the router is being used.

krrishdholakia avatar Dec 30 '24 15:12 krrishdholakia

@krrishdholakia using the litellm SDK, we still see these errors pop up. It is quite noisy and unhelpful.

inference-1               | 
inference-1               | Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
inference-1               | LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
inference-1               | 

Dmarcotrigiano avatar Feb 23 '25 04:02 Dmarcotrigiano
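
(For reference, the newer message points at litellm._turn_on_debug() instead of set_verbose - a minimal sketch, assuming that helper is available in the installed version:)

import litellm

litellm._turn_on_debug()  # enables verbose debug logging, per the message in the log output above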

Hey @Dmarcotrigiano do you have litellm.suppress_debug_info = True?

https://github.com/BerriAI/litellm/blob/d8d8a12b5d2e667fe3ab64b56400b4afed820d09/litellm/litellm_core_utils/exception_mapping_utils.py#L138

krrishdholakia avatar Feb 23 '25 04:02 krrishdholakia
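
(For anyone landing here: per the line linked above, the banner is gated on that flag, so silencing it in the SDK is a one-liner - a sketch, assuming default settings otherwise:)

import litellm

litellm.suppress_debug_info = True  # suppresses the "Give Feedback / Get Help" banner printed on mapped exceptions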