
Azure OpenAI Integration

Open thickmn opened this issue 1 year ago • 7 comments

Azure OpenAI is a common enterprise option for a lot of companies integrating LLMs into their environment. It's our only option for genAI, so any integration would be great!

thickmn avatar May 06 '24 13:05 thickmn

@thickmn - thanks - we are working on it!

doberst avatar May 07 '24 17:05 doberst

@thickmn - we have merged a proposed fix into the main branch - could you check out the example and confirm whether the AzureOpenAI client configuration meets your needs? We appreciate your help testing it to make sure it works for you!

doberst avatar May 07 '24 19:05 doberst

@doberst - thanks for the quick response! On my first pass at testing, it looks like it tries to call OpenAI with a user-managed API key and can't successfully set up the Azure client from the environment variables in the example. I'm going to spend more time on this tomorrow, but just as an FYI, when using LangChain's integration I had to pass: OPENAI_API_TYPE=azure, OPENAI_API_BASE, OPENAI_API_KEY, OPENAI_API_VERSION, DEPLOYMENT_NAME.
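
For illustration, that environment setup looks roughly like this (placeholder values; the exact variable names may vary by LangChain version):

import os

# placeholder values - the real endpoint, key, and deployment name come from the Azure portal
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "<your-azure-openai-key>"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
os.environ["DEPLOYMENT_NAME"] = "<your-deployment-name>"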

thickmn avatar May 08 '24 14:05 thickmn

Hi, I had the same problem as thickmn - I also have to use GPT through Azure. I tried the sample in the link, but it doesn't work ...

My model is GPT-3.5-turbo. In the request I use AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and api_version = "2023-07-01-preview",

and my deployment is named "test", so I call the model with: response = client.chat.completions.create( model = "test", messages =....

@doberst can you help me, please?

Clima2024 avatar Jun 11 '24 17:06 Clima2024

@Clima2024 - happy to help with this - and sorry that you have run into an issue ... Without sharing any confidential information, could you share the key 2-3 line code snippet, including the OpenAI configs? We will also test from our Azure account in parallel.

doberst avatar Jun 12 '24 13:06 doberst

Thanks @doberst .. no worries. In my local env, what I do is the following:

import os
from openai import AzureOpenAI

# with the vars registered in my environment, I do:
client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-07-01-preview",
)

# then I call GPT as:
response = client.chat.completions.create(
    model="test",
    messages=[
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Just say hi"},
    ],
)

print(response.choices[0].message.content)

# The model name "test" is the name of our Azure deployment - the underlying model is gpt-35-turbo (in Azure they don't use the dot)

Clima2024 avatar Jun 12 '24 14:06 Clima2024

Hi @doberst, I am also trying to use the embedding model (text-embedding-3-small) in my local Azure OpenAI account, and it failed. I'm not sure why my infra team named the models this way, but here I use model="test-embeddings" to do the embedding. Since you are already working on this, could you check the embedding call for Azure too? (if it is not asking too much.)
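
For context, a minimal sketch of what that embedding call looks like against the same AzureOpenAI client as above - again, "test-embeddings" is just our deployment name, not the underlying model:

# embedding call through the AzureOpenAI client configured earlier in this thread
response = client.embeddings.create(
    model="test-embeddings",
    input="Just a short test sentence",
)

vector = response.data[0].embedding
print(len(vector))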

Clima2024 avatar Jun 13 '24 22:06 Clima2024