UnsupportedProtocol error when prompting with an OpenAI LLM model in a Windows WSL environment

Open wmitnaj opened this issue 1 year ago • 3 comments

Describe the bug

When testing OpenDevin with OpenAI , every prompt request produces an OpenAIException which is raised by an UnsupportedProtocol Error telling me that a Request URL is missing an 'http://' or 'https://' protocol. Did I miss something when setting up the config? Mine looks pretty similar to others that I have seen.

Setup and configuration

Running OpenDevin in a Windows 11 WSL Ubuntu environment.

Current version:

707ab7b3f84fb5664ff63da0b52e7b0d2e4df545

My config.toml and environment vars (be sure to redact API keys):

LLM_MODEL="gpt-3.5-turbo-0125"
LLM_API_KEY="s...."
LLM_EMBEDDING_MODEL="openai"
WORKSPACE_DIR="./workspace"

My model and agent (you can see these settings in the UI):

  • Model: gpt-3.5-turbo-0125
  • Agent: MonologueAgent

Commands I ran to install and run OpenDevin:

git clone https://github.com/OpenDevin/OpenDevin.git
cd OpenDevin
sudo make build
sudo make setup-config
sudo make run

Steps to Reproduce:

  1. Set gpt-3.5-turbo-0125 as the model and openai for embeddings
  2. Run make run
  3. Prompt the model

Logs, error messages, and screenshots:

ERROR:
OpenAIException - Traceback (most recent call last):
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 167, in handle_request
    raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 931, in _request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/root/.cache/pypoetry/virtualenvs/opendevin-w_6MAHcD-py3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
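
For reference, the httpx error above can be reproduced in isolation whenever the request URL carries no scheme. A minimal sketch, independent of OpenDevin's actual call path (the empty base_url here is an assumption about the root cause, not confirmed from the code):

import httpx

# httpx merges base_url with the request path; with an empty base_url the
# final URL is just "/v1/chat/completions", which has no http:// or
# https:// scheme and raises httpx.UnsupportedProtocol as in the traceback.
client = httpx.Client(base_url="")
client.get("/v1/chat/completions")  # raises httpx.UnsupportedProtocol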

Additional Context

The opendevin server is running fine, so my setup seems to be correct; maybe I am missing something obvious. I suppose I don't need the LLM_BASE_URL config parameter when using openai?
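
One way this can happen, assuming the base URL is read from the environment, is an LLM_BASE_URL that exists but is empty: an empty string would override the OpenAI default instead of falling through to it. A hypothetical sketch of that failure mode (not OpenDevin's actual code):

import os

# An env var set to "" is not the same as one that is absent:
# os.environ.get() returns the empty string, which would silently replace
# the https://api.openai.com/v1 default with a schemeless URL.
base_url = os.environ.get("LLM_BASE_URL")   # "" if set but blank
if not base_url:                            # treat "" like unset
    base_url = "https://api.openai.com/v1"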

wmitnaj avatar Apr 10 '24 07:04 wmitnaj

You're right, I don't have it set either and it works.

Can you please try the suggestion in this comment? https://github.com/OpenDevin/OpenDevin/issues/908#issuecomment-2046517745

enyst avatar Apr 10 '24 08:04 enyst

I tried that; unfortunately it didn't help. I then tried this:

https://github.com/OpenDevin/OpenDevin/issues/908#issuecomment-2046184211

Which then leads to this when running make run:

 File "/root/.cache/pypoetry/virtualenvs/opendevin-4oYb6y1w-py3.11/lib/python3.11/site-packages/litellm/router.py", line 199, in __init__
    self.set_model_list(model_list)
  File "/root/.cache/pypoetry/virtualenvs/opendevin-4oYb6y1w-py3.11/lib/python3.11/site-packages/litellm/router.py", line 2091, in set_model_list
    deployment = self._add_deployment(deployment=deployment)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-4oYb6y1w-py3.11/lib/python3.11/site-packages/litellm/router.py", line 2127, in _add_deployment
    ) = litellm.get_llm_provider(
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.cache/pypoetry/virtualenvs/opendevin-4oYb6y1w-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 5850, in get_llm_provider
    raise e
  File "/root/.cache/pypoetry/virtualenvs/opendevin-4oYb6y1w-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 5837, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama-7b-chat
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Which is kinda weird, because llama-7b-chat was something I used in the beginning, but my current config still looks like this:

LLM_MODEL="gpt-3.5-turbo-1106"
LLM_API_KEY="s...."
LLM_EMBEDDING_MODEL="openai"
WORKSPACE_DIR="./workspace"

So maybe I have some problems with the poetry cache? I tried to clear the cache with poetry cache clear --all ., but it says there is nothing to clear. How can I make sure that every cache, and anything related to old state, is cleaned before I run make run again?
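
For context on the error itself: litellm infers the provider from the model string, so a bare name like llama-7b-chat fails while OpenAI model names resolve directly. A minimal sketch of the naming rule (requires a valid OPENAI_API_KEY in the environment to actually run):

import litellm

# OpenAI model names are recognized without a provider prefix:
litellm.completion(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "hello"}],
)

# Bare, unprefixed names like "llama-7b-chat" raise the BadRequestError
# above; other providers need an explicit "<provider>/" prefix, e.g.
# litellm.completion(model="huggingface/starcoder", messages=...)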

wmitnaj avatar Apr 10 '24 09:04 wmitnaj

Aha, that's exactly the problem: your installation is sending out stale information, but it's not poetry. We can fix it.

I'm going to reiterate some of what I said in other threads on this, but also complete it here. The idea is simple: the latest main branch should now work, but we need to get rid of leftover stale information in your environment.

  • stop the opendevin backend and frontend, and close any localhost tabs
  • clear local storage in your browser, or use another browser you haven't used with opendevin in the last few days
  • git pull
  • delete the ./cache folder in ./opendevin (important)
  • make build
  • make start-backend, and allow it to finish until it says uvicorn is ready or similar
  • make start-frontend
  • open localhost:3001

enyst avatar Apr 10 '24 18:04 enyst

Should be fixed with the new Docker installation method!

rbren avatar Apr 15 '24 15:04 rbren