[Bug]: Invalid response object from OpenAI-compatible service providers
### Is there an existing issue for the same bug? (If one exists, thumbs up or comment on the issue instead.)
- [x] I have checked the existing issues.
### Describe the bug and reproduction steps
```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/openhands/controller/agent_controller.py", line 305, in _step_with_exception_handling
    await self._step()
  File "/app/openhands/controller/agent_controller.py", line 827, in _step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/openhands/agenthub/codeact_agent/codeact_agent.py", line 191, in step
    response = self.llm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 338, in wrapped_f
    return copy(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 477, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 378, in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 400, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 480, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/openhands/llm/llm.py", line 282, in wrapper
    resp: ModelResponse = self._completion_unwrapped(*args, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1281, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1159, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 3225, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2232, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 467, in exception_type
    raise APIError(
litellm.exceptions.APIError: litellm.APIError: APIError: OpenAIException - Invalid response object Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/llm_response_utils/convert_dict_to_response.py", line 458, in convert_to_model_response_object
    assert response_object["choices"] is not None and isinstance(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
received_args={'response_object': {'id': None, 'choices': None, 'created': None, 'model': None, 'object': None, 'service_tier': None, 'system_fingerprint': None, 'usage': None, 'code': 20000, 'msg': 'Unknown error []'}, 'model_response_object': ModelResponse(id='chatcmpl-a0e566c2-f626-4cd4-bd0d-26b7d4b30202', created=1748237676, model=None, object='chat.completion', system_fingerprint=None, choices=[Choices(finish_reason='stop', index=0, message=Message(content=None, role='assistant', tool_calls=None, function_call=None, provider_specific_fields=None))], usage=Usage(completion_tokens=0, prompt_tokens=0, total_tokens=0, completion_tokens_details=None, prompt_tokens_details=None)), 'response_type': 'completion', 'stream': False, 'start_time': None, 'end_time': None, 'hidden_params': None, '_response_headers': {'date': 'Mon, 26 May 2025 05:34:41 GMT', 'content-type': 'application/json', 'content-length': '39', 'connection': 'keep-alive', 'traceresponse': '00-2505261334360a059985c5e35e46c0e9-576599d1064c5914-01', 'strict-transport-security': 'max-age=31536000; includeSubDomains'}, 'convert_tool_call_to_json_mode': None}
05:34:41 - openhands:INFO: agent_controller.py:591 - [Agent Controller 7b1f60cfafde45f6847dece31d1a77da] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
05:34:41 - openhands:INFO: session.py:253 - Agent status error
05:34:41 - openhands:INFO: agent_controller.py:591 - [Agent Controller 7b1f60cfafde45f6847dece31d1a77da] Setting agent(CodeActAgent) state from AgentState.ERROR to AgentState.ERROR
05:34:41 - openhands:INFO: session.py:312 - Agent status error
```
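For context, here is a minimal standalone sketch (not OpenHands or litellm code) of why the "Invalid response object" error is raised: the provider answered with its own error body (`code: 20000`, `msg: 'Unknown error []'`) instead of an OpenAI-style chat completion, so `choices` is `None` and the schema check quoted in the traceback fails. The dict below copies the relevant fields from the `received_args` dump above; `check_openai_shape` is a hypothetical helper that mirrors litellm's assertion.

```python
# What the OpenAI-compatible provider actually returned: an error payload,
# not a chat completion. Fields copied from the received_args dump above.
provider_reply = {
    "id": None,
    "choices": None,  # an OpenAI-style completion must carry a list here
    "created": None,
    "model": None,
    "usage": None,
    "code": 20000,              # provider-specific error field
    "msg": "Unknown error []",  # provider-specific error message
}


def check_openai_shape(response_object: dict) -> None:
    # Mirrors the assertion quoted in the traceback from
    # litellm's convert_dict_to_response.py.
    assert response_object["choices"] is not None and isinstance(
        response_object["choices"], list
    )


try:
    check_openai_shape(provider_reply)
except AssertionError:
    # litellm wraps this as: APIError: OpenAIException - Invalid response object
    print("Invalid response object")
```

So the agent itself is fine; the provider's error response simply does not match the OpenAI response schema that litellm expects.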
### OpenHands Installation
Docker command in README
### OpenHands Version
No response
### Operating System
None
### Logs, Errors, Screenshots, and Additional Context
No response
Hi there. Can you please mention what LLM you are using?
Yes, as Mamoodi said, your LLM provider seems to respond in a way that may not actually be OpenAI-compatible.
It could also help to disable tool use, if you have it enabled.
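If you want to try disabling tool use, the switch lives in the LLM section of the OpenHands config; a sketch, assuming a recent version where the option is named `native_tool_calling` — verify the exact name against your version's configuration reference:

```toml
# config.toml — sketch; the option name is an assumption, check the
# OpenHands configuration docs for your version before relying on it.
[llm]
native_tool_calling = false  # fall back to prompt-based (non-native) tool use
```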
Thank you! I will notify them about the issue.
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been stalled for over 30 days with no activity.