Deepseek-Chat or Deepseek-Coder
I am trying to use the Deepseek API from the dropdown list within OpenDevin. I chose Deepseek-Coder and provided the API key, but it fails. I also tried manually adding the deepseek/ prefix in front of the model name, but that failed too, and the same happens with Deepseek-Chat. What is the correct format for providing a Deepseek model name within the console?
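For reference, LiteLLM resolves the provider from the part of the model string before the first slash, so the documented format is deepseek/deepseek-chat or deepseek/deepseek-coder. A minimal sketch of the underlying call (assuming a litellm version that already recognizes the deepseek provider and a DEEPSEEK_API_KEY in the environment):

```python
import os
from litellm import completion

# LiteLLM parses the provider from the prefix before the first "/",
# so a forward slash is required (not a backslash).
os.environ["DEEPSEEK_API_KEY"] = "sk-..."  # your DeepSeek API key

response = completion(
    model="deepseek/deepseek-chat",  # or "deepseek/deepseek-coder"
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

The log below shows that exact model string being rejected, which suggests the litellm version bundled with the app predates deepseek provider support: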
06:40:54 - opendevin:ERROR: llm.py:164 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek/deepseek-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #5 | You can customize these settings in the configuration.
06:40:54 - opendevin:ERROR: agent_controller.py:147 - Error in loop
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 698, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 6400, in get_llm_provider
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 6387, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek/deepseek-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/opendevin/controller/agent_controller.py", line 142, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 256, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agenthub/codeact_agent/codeact_agent.py", line 223, in step
response = self.llm.completion(
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 330, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 467, in call
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 368, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 410, in exc_check
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 183, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 470, in call
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/llm/llm.py", line 188, in wrapper
resp = completion_unwrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 3222, in wrapper
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 3116, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2226, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 9233, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 9201, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek/deepseek-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
06:40:54 - opendevin:INFO: agent_controller.py:190 - Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
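If the bundled litellm indeed predates the deepseek provider, one possible workaround (a sketch under assumptions, not a confirmed fix) is to route the request through LiteLLM's generic OpenAI-compatible path, since DeepSeek exposes an OpenAI-style chat-completions API; verify the api_base against DeepSeek's current docs:

```python
from litellm import completion

# Assumed workaround: treat DeepSeek as a custom OpenAI-compatible
# endpoint. The base URL is taken from DeepSeek's public docs and
# may need adjusting.
response = completion(
    model="openai/deepseek-chat",            # generic OpenAI-compatible route
    api_base="https://api.deepseek.com/v1",  # DeepSeek's OpenAI-style endpoint
    api_key="sk-...",                        # your DeepSeek API key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

In the OpenDevin UI this would correspond to entering openai/deepseek-chat as the model name and the DeepSeek endpoint as the base URL.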