Support concurrent usage of OpenAI API and Azure OpenAI
I would like to make requests to both Azure OpenAI and the OpenAI API in my app, using the AzureChatOpenAI and ChatOpenAI classes respectively.
The issue I'm running into is that both classes seem to depend on the same environment variables/global OpenAI variables (openai.api_key, openai.api_type, etc.). For example, if I create an AzureChatOpenAI instance, those globals are set to the Azure config, which causes any subsequent OpenAI API calls to fail.
I also have two instances of Azure OpenAI that I want to hit (e.g. I have text-davinci-003 running in EU and gpt-3.5-turbo running in US, as gpt-3.5-turbo isn't supported in EU yet), so it would be nice if I could have separate instances of AzureChatOpenAI with different configs.
A workaround is to set these variables manually before every call, which AzureChatOpenAI more or less does, but this seems susceptible to race conditions when concurrent requests hit my app, since the values aren't passed directly into the request and there's no locking mechanism.
Would it be possible to have multiple instances of these classes without the instances obscurely sharing state? Or is this just a limitation of the way OpenAI's Python package is set up?
Thank you!
@kjwong could you not just add the right parameters when you instantiate the AzureChatOpenAI class? You can hardcode the parameters or load them from your own environment variables.
BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"

model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
)
See https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html
@iMicknl Yes you're right, that's what I'm doing! However, AzureChatOpenAI updates the openai global values under the hood, like so:
import openai

azure_chat_model = AzureChatOpenAI(
    openai_api_base=azure_url_us,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME_US,
    openai_api_key=AZURE_API_KEY_US,
    openai_api_type="azure",
)
print(openai.api_base)  # this will equal azure_url_us

azure_gpt3_model = AzureOpenAI(
    openai_api_base=azure_url_eu,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME_EU,
    openai_api_key=AZURE_API_KEY_EU,
    openai_api_type="azure",
)
print(openai.api_base)  # this will equal azure_url_eu

openai_chat_model = ChatOpenAI(openai_api_key=OPENAI_API_KEY)

# do A with Azure GPT-3
azure_gpt3_model.generate(...)

# do B with Azure Chat - this fails because our chat model is deployed in the US
# and api_base is pointing to EU
azure_chat_model.generate(...)

# do C with OpenAI Chat - this fails because api_base is still pointing to EU
openai_chat_model.generate(...)

# Ideally ChatOpenAI would work like AzureChatOpenAI and allow us to pass in
# params, but it doesn't, so we set them manually
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = None
openai.api_key = OPENAI_API_KEY

# this works now that we've reset the openai params
openai_chat_model.generate(...)
I think there are bound to be race conditions in a prod environment with concurrent requests since these classes are updating the same global values, which may lead to the incorrect model being called at runtime. Our app is latency sensitive so we wouldn't be able to lock these values on each request.
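The race is easy to make concrete with a toy stand-in for the openai module (FakeOpenAIModule and the URLs are made up; the two steps mimic "write the globals, then fire the request", and the barrier just forces the bad interleaving deterministically):

```python
import threading

class FakeOpenAIModule:
    """Stand-in for the real openai module's global config (hypothetical)."""
    api_base = None

fake = FakeOpenAIModule()
results = {}
barrier = threading.Barrier(2)

def request(name, base):
    fake.api_base = base           # step 1: write the global, as the wrapper does
    barrier.wait()                 # force both writes to land before either read
    results[name] = fake.api_base  # step 2: read it back when the call is issued

threads = [
    threading.Thread(target=request, args=("eu", "https://eu.example")),
    threading.Thread(target=request, args=("us", "https://us.example")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both requests read the same (last-written) base, so one of them hit the
# wrong endpoint.
assert results["eu"] == results["us"]
```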
Another issue: after instantiating an AzureOpenAI llm, those variables get set in the openai module, but if we then instantiate an OpenAI llm it won't set api_type, api_base and api_version back to the openai defaults. This happens because the BaseOpenAI object only accepts, validates and sets the api_key. Also, the AzureOpenAI model doesn't support these keys either; only the chat version (AzureChatOpenAI) does.
Any update on this? Is there a way to set multiple environments for different models? Right now the config is overwritten, as mentioned in the comments above.
Not sure if everyone noticed it but this should now be solved since https://github.com/hwchase17/langchain/pull/5792 🎉
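If I read the fix right, the idea is to carry per-instance values into each request instead of mutating module globals. A toy sketch of that pattern (generate and the config dicts are stand-ins, not the real LangChain API; the legacy openai SDK, pre-1.0, did accept api_key/api_base/api_type/api_version as per-call kwargs):

```python
# Each model keeps its own config; nothing module-global is mutated (sketch,
# names and URLs are illustrative).
AZURE_EU = {
    "api_type": "azure",
    "api_base": "https://eu.example.openai.azure.com",
    "api_version": "2023-03-15-preview",
    "api_key": "azure-eu-key",
}
OPENAI_COM = {
    "api_type": "open_ai",
    "api_base": "https://api.openai.com/v1",
    "api_version": None,
    "api_key": "openai-key",
}

def generate(prompt, config):
    # In the legacy openai SDK (<1.0) these could be forwarded per call, e.g.
    #   openai.ChatCompletion.create(messages=..., **config)
    # so nothing has to be written to openai.api_* between requests.
    return {"prompt": prompt, **config}

r1 = generate("hello", AZURE_EU)
r2 = generate("hello", OPENAI_COM)
assert r1["api_base"] != r2["api_base"]  # each call carries its own config
```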
Hi, @kjwong! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue you reported was regarding conflicts between the AzureChatOpenAI and ChatOpenAI classes due to them depending on the same environment variables. There was a suggestion to have separate instances of AzureChatOpenAI with different configurations to avoid this conflict. Additionally, it was mentioned that the AzureOpenAI model doesn't support certain keys and that the BaseOpenAI object doesn't reset some attributes when instantiating an OpenAI model.
However, it seems that the issue has been resolved in a recent pull request.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository!