litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2

katmai opened this issue 1 year ago · 7 comments

Describe the bug

It doesn't look like the config.toml options are being respected. I am trying to run with the config below, and yet it errors out complaining that I set llama2, which I didn't:

Setup and configuration

Current version:

commit 2855959c767feaea2b5e90dc76f93e996606d0e9 (HEAD -> main, origin/main, origin/HEAD)

My config.toml and environment vars (be sure to redact API keys):

LLM_API_KEY="ollama"
LLM_MODEL="ollama/dolphin-mixtral:latest"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
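
(For context: these values end up in LiteLLM, which infers the provider from the prefix of the model string. A minimal sketch of the call this config should ultimately translate to, assuming a stock litellm install; the message content is made up:)

import litellm

# The "ollama/" prefix is what tells LiteLLM which provider to route to.
# If only a bare name (e.g. "llama2") reaches LiteLLM, it raises the
# BadRequestError shown in the logs below.
response = litellm.completion(
    model="ollama/dolphin-mixtral:latest",  # provider prefix + model tag
    api_base="http://localhost:11434",      # the local Ollama server
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)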

My model and agent (you can see these settings in the UI):

  • Model:
  • Agent: Monologue

Commands I ran to install and run OpenDevin:

Steps to Reproduce: 1. 2. 3.

Logs, error messages, and screenshots:

INFO:     Started server process [2024491]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:3000 (Press CTRL+C to quit)
INFO:     ('127.0.0.1', 36840) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI1MzA0YTEwNC0xYTZkLTQ4MGUtYWI3MS1hNWU2YTVlM2NjNWMifQ.HSE1_FbnGQPzndeWSURFLcbvDMc7N2VI-0QJ-SpYNFQ" [accepted]
Starting loop_recv for sid: 5304a104-1a6d-480e-ab71-a5e6a5e3cc5c, False
INFO:     connection open

Provider List: https://docs.litellm.ai/docs/providers

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 240, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/home/atlas/OpenDevin/opendevin/server/listen.py", line 40, in websocket_endpoint
    await session_manager.loop_recv(sid, agent_manager.dispatch)
  File "/home/atlas/OpenDevin/opendevin/server/session/manager.py", line 37, in loop_recv
    await self._sessions[sid].loop_recv(dispatch)
  File "/home/atlas/OpenDevin/opendevin/server/session/session.py", line 33, in loop_recv
    await dispatch(action, data)
  File "/home/atlas/OpenDevin/opendevin/server/agent/manager.py", line 74, in dispatch
    await self.create_controller(data)
  File "/home/atlas/OpenDevin/opendevin/server/agent/manager.py", line 127, in create_controller
    llm = LLM(model=model, api_key=api_key, base_url=api_base)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/atlas/OpenDevin/opendevin/llm/llm.py", line 36, in __init__
    self._router = Router(
                   ^^^^^^^
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/litellm/router.py", line 198, in __init__
    self.set_model_list(model_list)
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/litellm/router.py", line 2075, in set_model_list
    ) = litellm.get_llm_provider(
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 5759, in get_llm_provider
    raise e
  File "/home/atlas/.cache/pypoetry/virtualenvs/opendevin-WmySEtCI-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 5746, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
INFO:     connection closed
INFO:     127.0.0.1:36848 - "GET /litellm-models HTTP/1.1" 200 OK
INFO:     127.0.0.1:36858 - "GET /litellm-agents HTTP/1.1" 200 OK
INFO:     127.0.0.1:36872 - "GET /messages/total HTTP/1.1" 200 OK
INFO:     127.0.0.1:36884 - "GET /litellm-agents HTTP/1.1" 200 OK
INFO:     127.0.0.1:36892 - "GET /messages/total HTTP/1.1" 200 OK
INFO:     127.0.0.1:36904 - "GET /litellm-models HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 36906) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI1MzA0YTEwNC0xYTZkLTQ4MGUtYWI3MS1hNWU2YTVlM2NjNWMifQ.HSE1_FbnGQPzndeWSURFLcbvDMc7N2VI-0QJ-SpYNFQ" [accepted]
Starting loop_recv for sid: 5304a104-1a6d-480e-ab71-a5e6a5e3cc5c, False
INFO:     connection open

[The same Provider List notice and an identical traceback repeat for this second connection, ending in the same BadRequestError for model=llama2.]
INFO:     connection closed
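
The call that fails here is litellm.get_llm_provider, which rejects any model string that lacks a recognizable provider prefix. A quick standalone sketch of the difference (not OpenDevin code; exact return values vary by LiteLLM version):

import litellm

# A bare model name raises BadRequestError, exactly as in the log above.
try:
    litellm.get_llm_provider(model="llama2")
except litellm.exceptions.BadRequestError as e:
    print("rejected:", e)

# A provider-prefixed name resolves cleanly.
model, provider, api_key, api_base = litellm.get_llm_provider(
    model="ollama/dolphin-mixtral:latest"
)
print(provider)  # -> "ollama"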

Additional Context

katmai · Apr 05 '24 19:04

This is a strange one. Can anyone reproduce?

rbren · Apr 06 '24 00:04

Yes, I tried clearing the Poetry cache as well as using a new workspace, but got the same results. I'll check the LiteLLM providers page for options, modify, and report back shortly. This was working earlier; I still have a file OpenDevin created a couple of days ago.

config.toml:

LLM_MODEL="gpt-4-0125-preview"
LLM_API_KEY="sk-xxxx"
LLM_EMBEDDING_MODEL="openai"
WORKSPACE_DIR="./workspace3"

Starting backend...
/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/pydantic/_internal/fields.py:151: UserWarning: Field "model_id" has conflict with protected namespace "model". You may be able to resolve this warning by setting model_config['protected_namespaces'] = ().
  warnings.warn(
/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/pydantic/_internal/fields.py:151: UserWarning: Field "model_name" has conflict with protected namespace "model". You may be able to resolve this warning by setting model_config['protected_namespaces'] = ().
  warnings.warn(
/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/pydantic/_internal/fields.py:151: UserWarning: Field "model_info" has conflict with protected namespace "model". You may be able to resolve this warning by setting model_config['protected_namespaces'] = ().
  warnings.warn(
INFO:     Started server process [32348]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:3000 (Press CTRL+C to quit)
INFO:     ('127.0.0.1', 60756) - "WebSocket /ws?token=eyJhbxxxxxxxxxxxxxxxxxSq6-AF4" [accepted]
Starting loop_recv for sid: 2ea8e955-20b1-41eb-8831-d1a4db1c31cb, False
INFO:     connection open

Provider List: https://docs.litellm.ai/docs/providers

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 240, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/middleware/cors.py", line 77, in __call__
    await self.app(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/server/listen.py", line 42, in websocket_endpoint
    await session_manager.loop_recv(sid, agent_manager.dispatch)
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/server/session/manager.py", line 37, in loop_recv
    await self._sessions[sid].loop_recv(dispatch)
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/server/session/session.py", line 33, in loop_recv
    await dispatch(action, data)
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/server/agent/manager.py", line 73, in dispatch
    await self.create_controller(data)
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/server/agent/manager.py", line 126, in create_controller
    llm = LLM(model=model, api_key=api_key, base_url=api_base)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/prashantshukla/Desktop/DEVHOME/learning/PubbyWork/OpenDevin/opendevin/llm/llm.py", line 36, in __init__
    self._router = Router(
                   ^^^^^^^
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/litellm/router.py", line 309, in __init__
    self.set_model_list(model_list)
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/litellm/router.py", line 2176, in set_model_list
    deployment = self._add_deployment(deployment=deployment)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/litellm/router.py", line 2212, in _add_deployment
    ) = litellm.get_llm_provider(
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/litellm/utils.py", line 5774, in get_llm_provider
    raise e
  File "/Users/prashantshukla/Library/Caches/pypoetry/virtualenvs/opendevin-uZZ2kltN-py3.12/lib/python3.12/site-packages/litellm/utils.py", line 5761, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=meta-llama/Llama-2-13b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

imperiousprashant · Apr 06 '24 18:04

Does the problem persist if you choose gpt-4-0125-preview in the UI settings?

rbren · Apr 06 '24 18:04

That's the setup for local LLM outlined in the docs here:

https://github.com/OpenDevin/OpenDevin/blob/main/docs/documentation/LOCAL_LLM_GUIDE.md

katmai · Apr 06 '24 19:04

Hi, I'm a complete beginner with LLMs in general, and I would like to install OpenDevin with Ollama as the LLM provider. I'm getting the same bug as katmai. I followed LOCAL_LLM_GUIDE.md, and my config.toml is not respected:

LLM_API_KEY="ollama" WORKSPACE_DIR="./workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "ollama/codellama:7b" LLM_EMBEDDING_MODEL="BAAI/bge-small-en-v1.5"

Could you help with a fix? Thanks.

ValerioPace · Apr 06 '24 23:04

Adding the Git commit for reference:

commit c0dfc851b9e9de352ad4adcb4752ef14c4ba9be2 (HEAD -> main, origin/main, origin/HEAD)

ValerioPace · Apr 06 '24 23:04

Does the problem persist if you choose gpt-4-0125-preview in the UI settings?

When I changed from GPT-4 to Claude, I had to update BOTH config.toml AND the UI settings.

In the OP's case, the problem with the UI is that it doesn't have ollama/mixtral in the dropdown.

Hope this helps

andrewparry · Apr 09 '24 21:04

For @katmai: in his config he used ollama, but the log shows llama2. For @imperiousprashant: in his config he used GPT-4, but the log shows meta-llama/Llama-2-13b.

Did you set LLM_MODEL as an environment variable?

https://github.com/OpenDevin/OpenDevin/pull/962#issuecomment-2048907647 @enyst Why not make the order of precedence config.toml > env?

SmartManoj · Apr 13 '24 12:04

@katmai @ValerioPace This issue is from last week. We've since merged fixes for the UI overriding config.toml, and according to newer reports it works now. Ref: https://github.com/OpenDevin/OpenDevin/pull/863, which fixed multiple reports of this.

I'll close this; please update the repo (git pull), and if you're experiencing other issues, feel free to open a new one.

@SmartManoj You may want to see the issues linked in the PR above, and there are more, though I'm not sure how to look them all up. There was also this much older one, which was fixed before something broke again: https://github.com/OpenDevin/OpenDevin/issues/500. Cause: the frontend's default model was overriding the backend settings.

Re: your question. The reasoning I've found most often in the Python world is that the user should be able to override the app's behavior without needing to mess with config files. That implies their env is respected first, if the user has set it. FWIW:

  1. we had the opposite order in the past, too
  2. since we switched to env > config, I haven't yet seen a case where the env caused trouble, though I'm not able to keep up with reading everything
  3. I have seen cases where it could save the day
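
A minimal sketch of that env > config.toml precedence (a hypothetical helper for illustration, not the actual OpenDevin code):

import os
import tomllib  # stdlib in Python 3.11+

def get_setting(key: str, default=None):
    # The environment wins if the user has set it...
    if key in os.environ:
        return os.environ[key]
    # ...otherwise fall back to config.toml, then to the default.
    with open("config.toml", "rb") as f:
        return tomllib.load(f).get(key, default)

model = get_setting("LLM_MODEL", "gpt-3.5-turbo")  # default is illustrative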

enyst · Apr 13 '24 14:04

For @katmai: in his config he used ollama, but the log shows llama2. For @imperiousprashant: in his config he used GPT-4, but the log shows meta-llama/Llama-2-13b.

Did you set LLM_MODEL as an environment variable?

#962 (comment) @enyst Why not make the order of precedence config.toml > env?

I didn't set any env variables, just config.toml.

katmai · Apr 13 '24 15:04

Getting the below error while using models other than GPT, e.g. via Groq or even via Ollama:

BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3-70b-8192

Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)

roopeshkrokade · Sep 24 '24 19:09

@roopeshkr Maybe the model name is not complete? Can you please take a look at this: https://docs.all-hands.dev/modules/usage/llms/groq and enter both the provider and the model name in the UI. If you still experience problems, please open a new issue.
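
In LiteLLM terms, that means keeping the groq/ prefix on the model name. A minimal sketch (placeholder API key; the bare llama3-70b-8192 reproduces the error above):

import litellm

# "groq/" routes the call through LiteLLM's Groq provider.
response = litellm.completion(
    model="groq/llama3-70b-8192",
    api_key="gsk_xxxx",  # placeholder; use your Groq key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)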

enyst · Sep 24 '24 21:09