langchainjs
Added implementation for Azure OpenAI
I added an implementation for using LangChain with Azure OpenAI. In general, it would be sufficient to make openAIApiKey
non-mandatory by removing the check-and-throw for it, but I thought it makes sense to add some API surface that makes it more obvious how to set up the models for Azure OpenAI.
I added it to the completion models, the embedding model, and the chat model.
All Azure parameters can also be set via environment variables.
I also adjusted the docs accordingly. I haven't yet adjusted the unit tests, and I would like to do that before this is merged.
I know that there is already a PR for Azure OpenAI, but I think it makes more sense to use the same models for both Azure OpenAI and OpenAI's API, because maintenance is easier that way and both benefit when new features are added. So this PR does not add new models for Azure; it reuses the same models, extending them with a few extra parameters and enforcing those when the Azure OpenAI API key is set.
This implementation also throws if both API keys, openAIApiKey and azureOpenAIApiKey, are missing.
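The key-resolution behavior described above (constructor parameters with environment-variable fallback, Azure key taking precedence, and a throw when both keys are missing) can be sketched roughly like this. This is a hypothetical standalone sketch, not the actual langchainjs code; the field and env-var names mirror the ones used in this PR.

```typescript
// Hypothetical sketch of the key-resolution logic described in the PR:
// prefer azureOpenAIApiKey, fall back to openAIApiKey, throw if both missing.
interface OpenAIAuthFields {
  openAIApiKey?: string;
  azureOpenAIApiKey?: string;
}

function resolveApiKey(
  fields: OpenAIAuthFields,
  env: Record<string, string | undefined> = {}
): { key: string; isAzure: boolean } {
  // Azure key wins; each key can also come from an environment variable.
  const azureKey = fields.azureOpenAIApiKey ?? env.AZURE_OPENAI_API_KEY;
  if (azureKey) return { key: azureKey, isAzure: true };
  const openAIKey = fields.openAIApiKey ?? env.OPENAI_API_KEY;
  if (openAIKey) return { key: openAIKey, isAzure: false };
  throw new Error("Either openAIApiKey or azureOpenAIApiKey must be set");
}
```

When isAzure is true, the model would additionally require the instance name, deployment name, and API version parameters mentioned later in this thread.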
The latest updates on your projects. Learn more about Vercel for Git ↗︎
Name | Status | Preview | Updated (UTC) |
---|---|---|---|
langchainjs-docs | ✅ Ready (Inspect) | Visit Preview | May 5, 2023 2:02pm |
I don't have access to Vercel. Could one of the maintainers please look into the issues and let me know what to fix?
When will this be available in the langchain package? We are waiting for this one.
I am working on the tests. I think I will have them done by tomorrow around this time.
Still hoping for some review though 😊
So I added a few tests for the exports, but honestly, the whole experience of running the tests is really a pain for me.
- I have problems compiling the project on Windows, because node-gyp isn't finding the VC tools and I really don't know why.
- On WSL I can get it compiling and running the tests, but it takes forever, so some tests time out because it runs that slowly.
- I looked into adding integration tests for Azure, but for them to run in the GitHub CI action, this project would need an Azure API key (plus a deployment, instance, etc.), and I have no idea whether the project has access to one. Maybe our friends from MS can provide an Azure instance for this project?
- I would also like to add some examples to the repo, but again the problem is the API keys; in any case, I will add the examples in another PR.
- So far I am done, but I want to run some last manual tests, and afterwards I will edit the PR name so it can be merged.
I would really appreciate it if someone could run the tests.
OK, I got it all working with node-gyp, and I was able to run all the tests; everything looks good. I also rebased onto main.
@nfcampos could you please check and merge?
fixes #887 and #601
Hi @nfcampos, could you please review this PR and merge it? It is really useful. Thank you very much.
do we have an eta to enable this function in langchain?
Has this been tested with chat completion streaming? It seems Azure's response has a single blank line (compared to OpenAI's two blank lines) after data: [DONE], so getMessages in event-source-parse never invokes onMessage to send the [DONE] event?
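To see why the trailing blank line matters, here is a minimal, hypothetical sketch of SSE event dispatch (this is not the actual event-source-parse code). An SSE event is only dispatched once a blank line terminates it, so a final "data: [DONE]" followed by a single newline instead of two never gets flushed:

```typescript
// Simplified SSE parsing: events are separated by a blank line ("\n\n").
// Whatever follows the last "\n\n" is an unterminated partial event and
// is NOT dispatched -- that is the failure mode described above.
function parseSSE(stream: string): string[] {
  const segments = stream.split("\n\n");
  const complete = segments.slice(0, -1); // last segment is unterminated
  const events: string[] = [];
  for (const seg of complete) {
    const data = seg
      .split("\n")
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice("data: ".length))
      .join("\n");
    if (data) events.push(data);
  }
  return events;
}

// OpenAI terminates the stream with a blank line after [DONE],
// so the event is dispatched:
parseSSE("data: [DONE]\n\n"); // ["[DONE]"]
// A single trailing newline leaves [DONE] unterminated, so it is dropped:
parseSSE("data: [DONE]\n"); // []
```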
Hi folks, is one of you able to test this? I don't have access to an Azure model to test it with
I've made some small changes, looks good to me, pending someone being able to test this
I ran the streaming tests against Azure and they looked good to me. But I'd be happy to run other tests if someone has any specific tests in mind.
Cool, if you've tested I'm going to merge this now. Thanks a lot for your work on this
I see the following when trying to build. Were references moved in the latest build?
Error: node_modules/langchain/dist/llms/openai.d.ts:2:66 - error TS2307: Cannot find module 'types/open-ai-types.js' or its corresponding type declarations.
Original call:
import { OpenAI } from "langchain/llms/openai";
@acshulk Thanks for reporting, this is fixed in 0.0.71
@nfcampos I'm on 0.0.71 and still getting the error mentioned by @acshulk, on both import { OpenAI } from "langchain/llms/openai" and import { OpenAIEmbeddings } from "langchain/embeddings/openai":
node_modules/langchain/dist/embeddings/openai.d.ts(2,34): error TS2307: Cannot find module '../types/open-ai-types.js' or its corresponding type declarations.
node_modules/langchain/dist/llms/openai-chat.d.ts(2,70): error TS2307: Cannot find module '../types/open-ai-types.js' or its corresponding type declarations.
node_modules/langchain/dist/llms/openai.d.ts(2,66): error TS2307: Cannot find module '../types/open-ai-types.js' or its corresponding type declarations.
@bryantpa That is fixed in 0.0.72
Tested on 0.0.71: with Azure OpenAI and streaming = true, I can't get the conversation result returned (it hangs on await).
Once streaming is set to false, or when using OpenAI, it works.
Tested on 0.0.72, still the same.
Using ChatOpenAI with gpt-4-32k.
@baobo5625 Sadly I do not have access to a model on Azure, so I'm unable to test this personally. Perhaps @dersia can clarify whether he tested that scenario?
I don't think it depends on the model; 3.5 should produce the same result.
Here is the snippet (azureOpenAIConfig, logger, and message come from the surrounding application code):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanChatMessage } from "langchain/schema";

const chatStreaming = new ChatOpenAI({
  temperature: 0,
  streaming: true,
  azureOpenAIApiKey: azureOpenAIConfig.openApiKey,
  azureOpenAIApiInstanceName: azureOpenAIConfig.accountName,
  azureOpenAIApiDeploymentName: azureOpenAIConfig.deploymentName,
  azureOpenAIApiVersion: "2023-03-15-preview",
});

const response = await chatStreaming.call([new HumanChatMessage(message)], undefined, [
  {
    handleLLMNewToken(token: string) {
      logger.info(token);
    },
  },
]);
return response;
```
Found the reason: the last message in Azure streaming is
{"id":"chatcmpl-7DxBV9AgR8GuvqLyrsmRtDav2M9zJ","object":"chat.completion.chunk","created":1683560125,"model":"gpt-4-32k","choices":[{"index":0,"finish_reason":"stop","delta":{}}],"usage":null}
It doesn't use the stop word [DONE].
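One defensive way to handle both behaviors would be to treat a chunk whose choice carries finish_reason "stop" as end-of-stream in addition to the [DONE] sentinel. This is a hypothetical sketch based on the chunk shape reported above, not the fix actually applied in the linked PR:

```typescript
// Hypothetical end-of-stream check: accept either the OpenAI "[DONE]"
// sentinel or (per the Azure report above) a chunk whose choice has
// finish_reason === "stop".
interface StreamChunk {
  choices: {
    index: number;
    finish_reason: string | null;
    delta: Record<string, unknown>;
  }[];
}

function isStreamDone(raw: string): boolean {
  if (raw.trim() === "[DONE]") return true; // OpenAI sentinel
  const chunk: StreamChunk = JSON.parse(raw);
  return chunk.choices.some((c) => c.finish_reason === "stop");
}
```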
I tested with my local axios implementation, and Azure OpenAI does return the "[DONE]" message.
Trying Azure OpenAI vs. OpenAI, the "onmessage" handler can capture [DONE] from OpenAI, but not from Azure.
If streaming is not set, it works.
@baobo5625 I believe https://github.com/hwchase17/langchainjs/pull/1167 will fix your issue, any chance you could pull this locally and test it?
It could be one way to fix it; however, I tested with my local axios implementation for Azure OpenAI, and it does return the [DONE] message.
I tested locally and it works.
I suggest adding a TODO there to find out why.