
Azure OpenAI chat model support

huanghe1986 opened this issue 1 year ago · 1 comment

Describe your question

I want to know whether OpenDevin supports the chat mode of Azure OpenAI, and how to configure it.

Additional context

My config.toml configuration is:

    LLM_MODEL="gpt35-16k"
    LLM_API_KEY="xxxxxxxxx"
    LLM_EMBEDDING_MODEL="azureopenai"
    LLM_BASE_URL="https://abc-01.openai.azure.com/"
    LLM_DEPLOYMENT_NAME="gpt35-16k"
    LLM_API_VERSION="2024-02-01"
    WORKSPACE_BASE="/opt/opendevin/workspace"

After running, the following errors are reported:

  1. Client error '401 Unauthorized' for url 'https://abc-01.openai.azure.com/openai/deployments/gpt35-16k/embeddings?api-version=2024-02-01'

  2. llama_index.embeddings.openai.utils:Retrying llama_index.embeddings.openai.base.get_embeddings in 1.2440917154413778 seconds as it raised AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}.

Using Postman with the same configuration but a different URL, a chat-mode call returns normal results: https://abc-01.openai.azure.com/openai/deployments/gpt35-16k/chat/completions?api-version=2024-02-01

huanghe1986 avatar Apr 23 '24 13:04 huanghe1986

Hi there! First, it seems you're not using the latest version. That doesn't have to be a problem, just so we know: are you using 0.3.1?

What happens here is that there are two models: one for chat and one for embeddings. I think they get a bit mixed up in the config you show. The two uses don't depend on each other.

  1. One way to make it work is like this: https://github.com/OpenDevin/OpenDevin/blob/ded0a762aa019ad2bdc1317131de54d71ad40657/docs/documentation/AZURE_LLM_GUIDE.md

Notice that the deployment variable refers to an embedding model deployment, not a chat model.
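
In config.toml terms, it could look something like this (a rough sketch, untested; it assumes the guide uses litellm's azure/<deployment> model format, and "my-embeddings" is a hypothetical stand-in for a real embedding model deployment on your Azure resource):

    # Chat model, in litellm's Azure format: azure/<chat-deployment-name>
    LLM_MODEL="azure/gpt35-16k"
    LLM_API_KEY="xxxxxxxxx"
    LLM_BASE_URL="https://abc-01.openai.azure.com/"
    LLM_API_VERSION="2024-02-01"
    # Embeddings: this deployment must be an embedding model
    # (e.g. a deployment of text-embedding-ada-002), not the chat model
    LLM_EMBEDDING_MODEL="azureopenai"
    LLM_DEPLOYMENT_NAME="my-embeddings"  # hypothetical embedding deployment name
    WORKSPACE_BASE="/opt/opendevin/workspace"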

  2. Another, simpler way is to make chat work first, and then see about embeddings:
  • set LLM_EMBEDDING_MODEL = "local" (a sketch of this is below the list)
  • see if things work this way
  • then try to make it work for embeddings on Azure (which isn't strictly necessary, but could be nice, of course)
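
For this route, the config could look roughly like this (again a sketch, not tested against your deployment; same azure/<deployment> assumption for the chat model):

    # Chat on Azure; embeddings computed locally, so no Azure embedding deployment needed
    LLM_MODEL="azure/gpt35-16k"
    LLM_API_KEY="xxxxxxxxx"
    LLM_BASE_URL="https://abc-01.openai.azure.com/"
    LLM_API_VERSION="2024-02-01"
    LLM_EMBEDDING_MODEL="local"
    WORKSPACE_BASE="/opt/opendevin/workspace"

If chat calls succeed with this, the remaining 401 on the embeddings endpoint can be tackled separately.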

enyst avatar Apr 23 '24 13:04 enyst

Sounds like this one is solved. OP, let us know if you continue to have trouble!

rbren avatar Apr 25 '24 19:04 rbren