fix: Issue where the CodeAct agent was trying to log cost on a local LLM and throwing an undefined-model exception out of litellm
Fixes the error introduced in https://github.com/OpenDevin/OpenDevin/pull/1612 when using a local model.
@rbren please review this PR, thanks.
Thanks for the fix!
Instead of this implementation, what if we changed the `completion_cost` function in llm.py to return `0.00` when working with local models?
Is there a better way than checking `'localhost' not in self.base_url` to know whether the model is local?
Ah I didn't realize we were just exposing the completion_cost function directly from LiteLLM.
What I'd suggest is adding a new function to the LLM class in llm.py:
def completion_cost(self, prompt):
    try:
        return litellm.completion_cost(prompt)
    except Exception:
        return 0
I added the localhost check so that there is at least a warning in the log when the model is not local. That way we aren't silently logging $0 in cases where there is a real remote cost but the model is unsupported by litellm's costing module.
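Putting the two suggestions together, a minimal sketch of what the wrapper could look like (the class shape, `base_url` attribute, and logger name here are assumptions, not the actual OpenDevin code):

```python
import logging

logger = logging.getLogger(__name__)


class LLM:
    """Minimal sketch of an LLM wrapper; the real class carries more config."""

    def __init__(self, model, base_url=None):
        self.model = model
        self.base_url = base_url

    def completion_cost(self, response):
        """Return the cost of a completion, or 0.0 if it cannot be priced."""
        # Assumed heuristic from this thread: treat localhost URLs as free
        # local models and skip litellm's pricing lookup entirely.
        if self.base_url is not None and 'localhost' in self.base_url:
            return 0.0
        try:
            # Imported lazily so the sketch still runs without litellm installed.
            import litellm
            return litellm.completion_cost(completion_response=response)
        except Exception:
            # Unsupported model (e.g. litellm.exceptions.NotFoundError):
            # warn instead of crashing the agent loop.
            logger.warning('Cost calculation failed for model %s', self.model)
            return 0.0
```

The key design point is that the `try/except` lives inside our own method, so callers such as `codeact_agent.py` never see litellm's `NotFoundError`.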
Codecov Report
Attention: Patch coverage is 75.00% with 6 lines in your changes missing coverage. Please review.
:exclamation: No coverage uploaded for pull request base (main@0cf94a2).
| Files | Patch % | Lines |
|---|---|---|
| opendevin/llm/llm.py | 66.66% | 6 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #1666 +/- ##
=======================================
Coverage ? 62.82%
=======================================
Files ? 96
Lines ? 3857
Branches ? 0
=======================================
Hits ? 2423
Misses ? 1434
Partials ? 0
Thanks for this!
The error is still there unfortunately, at least for me:
21:55:24 - opendevin:ERROR: agent_controller.py:150 - Error in loop
Traceback (most recent call last):
File "/app/opendevin/controller/agent_controller.py", line 145, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 261, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agenthub/codeact_agent/codeact_agent.py", line 231, in step
cur_cost = completion_cost(completion_response=response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 4453, in completion_cost
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 4439, in completion_cost
) = cost_per_token(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 4254, in cost_per_token
raise litellm.exceptions.NotFoundError( # type: ignore
litellm.exceptions.NotFoundError: Model not in model_prices_and_context_window.json. You passed model=ollama/llama3:8b-instruct-q8_0. Register pricing for model - https://docs.litellm.ai/docs/proxy/custom_pricing
21:55:24 - opendevin:INFO: agent_controller.py:193 - Setting agent state from AgentState.RUNNING to AgentState.ERROR
I am running the latest commit.
@simplesisu Which commit?
File "/app/agenthub/codeact_agent/codeact_agent.py", line 231, in step
This line was changed in this PR.