
langchain.embeddings.OpenAIEmbeddings is not working with AzureOpenAI

Open JonAtDocuWare opened this issue 2 years ago • 46 comments

When using the AzureOpenAI LLM, the OpenAIEmbeddings class does not work. After reviewing the source, I believe this is because the class does not accept any parameters other than an api_key. A "model deployment name" parameter would be needed, since on Azure the model name alone is not enough to identify the engine. I did, however, find a workaround: if you name your deployment exactly "text-embedding-ada-002", then OpenAIEmbeddings will work.

edit: my workaround works on version 0.0.88, but not on the current version.
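For reference, the workaround boils down to pointing the openai client at Azure via environment variables and relying on the deployment name matching the default model name. A sketch only; the resource name and key below are placeholders:

```shell
# Sketch of the workaround; endpoint and key are placeholders.
# With the Azure deployment named exactly "text-embedding-ada-002",
# the stock OpenAIEmbeddings defaults line up with the deployment path.
export OPENAI_API_TYPE=azure
export OPENAI_API_BASE="https://<your-resource>.openai.azure.com"
export OPENAI_API_KEY="<your-azure-key>"
export OPENAI_API_VERSION="2022-12-01"
```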

JonAtDocuWare avatar Mar 09 '23 16:03 JonAtDocuWare

I'm having the same issue. However, in my case, my deployment is also called text-embedding-ada-002, but it's failing with the following error:

InvalidRequestError: Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.

I printed the name of the model right before the call in the source code, and it's correct, so I'm not sure what's happening.

aajn88 avatar Mar 11 '23 13:03 aajn88

Actually, reading the error again, it seems like it's not related to the name of the model; it's just Azure throttling.

So, in order to configure langchain embeddings properly, I have to replace this part with my desired chunk_size:

embeddings = embedding.embed_documents(texts, 1)

Of course, ideally this should be sent by parameter

aajn88 avatar Mar 11 '23 14:03 aajn88

I ran into the same issue with the chunk_size and embeddings in Azure OpenAI Services and submitted a fix. The easiest way is to initialize OpenAIEmbeddings with chunk_size=1. It also helps with helper functions where you can't pass the chunk_size, e.g. .from_documents():

embeddings = OpenAIEmbeddings(chunk_size=1)

For the deployment name, it should work with names != text-embedding-ada-002:

embeddings = OpenAIEmbeddings(engine=<your deployment name>)

floleuerer avatar Mar 12 '23 11:03 floleuerer

@floleuerer wait, there's no AzureOpenAIEmbeddings, is there? Were you referring to AzureOpenAI instead?

aajn88 avatar Mar 12 '23 11:03 aajn88

Ah, sorry, my bad! It's OpenAIEmbeddings, of course.

floleuerer avatar Mar 12 '23 11:03 floleuerer

Thanks for confirming. I got excited thinking they had a version for Azure! Although it doesn't make a big difference, as it's just a configuration matter.

I just got to my laptop and tried. It fails with the following message:

pydantic.error_wrappers.ValidationError: 1 validation error for OpenAIEmbeddings
chunk_size
  extra fields not permitted (type=value_error.extra)

Maybe I have the wrong version? I forked version 0.0.100 - could you confirm yours?

aajn88 avatar Mar 12 '23 11:03 aajn88

The fix should be in 0.0.106.

floleuerer avatar Mar 12 '23 11:03 floleuerer

Gotcha! I'll try to fork the new version this week.

aajn88 avatar Mar 12 '23 11:03 aajn88

Is this issue fixed? I'm on version 0.0.107 but still getting the error below when trying to use OpenAIEmbeddings with the Azure OpenAI service. I tried both the FAISS and Pinecone vector stores; from_documents() fails with openai.error.InvalidRequestError: Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions

deb007 avatar Mar 12 '23 14:03 deb007

I ran into the same issue with the chunk_size and Embeddings in Azure OpenAI Services and provided a fix. The easiest way is to initialize your OpenAIEmbeddings with chunk_size=1 - it works in other helper functions too when you can't pass the chunk_size e.g. .from_documents()...

embeddings = OpenAIEmbeddings(chunk_size=1)

For the deployment name, it should work with names != text-embedding-ada-002:

embeddings = OpenAIEmbeddings(engine=<your deployment name>)

@deb007 did you try @floleuerer solution?

aajn88 avatar Mar 12 '23 14:03 aajn88

I ran into the same issue with the chunk_size and Embeddings in Azure OpenAI Services and provided a fix. The easiest way is to initialize your OpenAIEmbeddings with chunk_size=1 - it works in other helper functions too when you can't pass the chunk_size e.g. .from_documents()...

embeddings = OpenAIEmbeddings(chunk_size=1)

For the deployment name, it should work with names != text-embedding-ada-002:

embeddings = OpenAIEmbeddings(engine=<your deployment name>)

@deb007 did you try @floleuerer solution?

Yes, I tried OpenAIEmbeddings(document_model_name="<deployment_name>", chunk_size=1) - it gives error - openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

OpenAIEmbeddings(document_model_name="<deployment_name>") gives error - openai.error.InvalidRequestError: Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.

OpenAIEmbeddings(engine="<deployment_name>") gives below error: extra fields not permitted (type=value_error.extra)

deb007 avatar Mar 12 '23 16:03 deb007

Actually, by reading the error again, it seems like it's not related to the name of the model and azure just throttling.

So, in order to configure langchain embeddings properly, I have to replace this part with my desired chunk_size:

embeddings = embedding.embed_documents(texts, 1)

Of course, ideally this should be sent by parameter

@deb007 this 👆 worked for me.

aajn88 avatar Mar 12 '23 17:03 aajn88

embeddings = OpenAIEmbeddings(chunk_size=1)

This also worked for me. Thanks!

However, what are the implications of setting chunk size to 1?

tomasfernandez1212 avatar Mar 15 '23 19:03 tomasfernandez1212

Unfortunately, I have tested all of the methods above and none of them worked, including setting deployment_name to text-davinci-003 and changing versions. I'm confused: setting openai.api_base and openai.api_type doesn't work either. When I comment that code out, there is no difference, and I still get the following error:

openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

This solved my problem: https://github.com/hwchase17/langchain/issues/1560#issue-1617563090

Huyueeer avatar Mar 16 '23 01:03 Huyueeer

embeddings = OpenAIEmbeddings(chunk_size=1)

This also worked for me. Thanks!

However, what are the implications of setting chunk size to 1?

I think latency (costs should be the same, as pricing is based on tokens and not API calls). If I'm not mistaken, you can send multiple texts in one call. I remember reading this somewhere, but I couldn't find it in their docs anymore. However, the error seems to align with what I'm saying.
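To illustrate the trade-off: chunk_size only controls how many texts go into each embeddings request, so with Azure's one-input limit you pay in request count (and therefore latency), not tokens. A minimal, self-contained sketch (the helper name is illustrative, not langchain's actual implementation):

```python
def batch_texts(texts, chunk_size):
    """Split texts into per-request batches, mirroring how a client
    would group inputs for the embeddings endpoint."""
    return [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]

texts = [f"document {i}" for i in range(10)]

# Against the public OpenAI API, large batches are allowed: one request.
print(len(batch_texts(texts, 1000)))  # 1
# Against Azure's 1-input limit, chunk_size=1 means one request per text.
print(len(batch_texts(texts, 1)))     # 10
```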

aajn88 avatar Mar 16 '23 08:03 aajn88

I'm on langchain-0.0.117 and as long as I use OpenAIEmbeddings() without any parameters, it works smoothly with Azure OpenAI Service, but requires the model deployment to be named text-embedding-ada-002.

csiebler avatar Mar 21 '23 12:03 csiebler

@floleuerer @aajn88

Setting the chunk_size explicitly while initializing OpenAIEmbeddings() worked for me. Setting the deployment name explicitly did not help overcome the max-inputs error.

langchain v=0.0.117 (latest)

hr1sh avatar Mar 21 '23 13:03 hr1sh

@floleuerer @aajn88

Setting the chunk_size explicitly while initializing OpenAIEmbeddings() worked for me. Setting the deployment name explicitly did not help overcome the max-inputs error.

langchain v=0.0.117 (latest)

Agreed, I was not able to get it to work by setting the model name manually.

csiebler avatar Mar 21 '23 13:03 csiebler

I'm on langchain=0.0.119 but OpenAIEmbeddings() throws an AuthenticationError: Incorrect API key provided... it seems that it tries to authenticate through the OpenAI API instead of the AzureOpenAI service, even when I configured the OPENAI_API_TYPE and OPENAI_API_BASE previously. Does anyone have the same problem?... tried with version 0.0.117 but the problem persists

germanpinzon807 avatar Mar 22 '23 20:03 germanpinzon807

I'm on langchain=0.0.119 but OpenAIEmbeddings() throws an AuthenticationError: Incorrect API key provided... it seems that it tries to authenticate through the OpenAI API instead of the AzureOpenAI service, even when I configured the OPENAI_API_TYPE and OPENAI_API_BASE previously. Does anyone have the same problem?... tried with version 0.0.117 but the problem persists

For some reason, the approach suggested in the documentation (setting environment variables) does not work. However, a workaround is, before OpenAIEmbeddings is ever called, to import the openai package and set the parameters manually, e.g.:

import openai
openai.api_base = "www.x.com/" 
openai.api_type = 'azure'
openai.api_version = "2022-12-01" 
# optionally, set key

then

OpenAIEmbeddings(document_model_name="MODEL_NAME", chunk_size=1)

Granine avatar Mar 23 '23 19:03 Granine

A straightforward yet messy solution seems to be updating the validate_environment validator function in langchain/embeddings/openai.py with a check-and-fix, as this is where openai first appears to be invoked.

Granine avatar Mar 23 '23 19:03 Granine

I'm suffering from these same issues, although my error message is a bit different (likely since I'm using LangChain v0.0.123):

2023-03-25 19:47:29.827 INFO    openai: error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2022-12-01 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 5 seconds. Please contact Azure support service if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False
2023-03-25 19:47:29.831 WARNING langchain.embeddings.openai: Retrying langchain.embeddings.openai.embed_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: Requests to the Embeddings_Create Operation under Azure OpenAI API version 2022-12-01 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 5 seconds. Please contact Azure support service if you would like to further increase the default rate limit..

It seems to me that the wait time between retries (which is clearly handled in multiple places in LangChain) may not be the issue so much as the number of async workers hitting Azure within a minute (since the limit is something like 300 requests per minute for embeddings). Does anyone know where in the LangChain code the maximum number of async workers is defined, so that we could throttle it down to avoid this error?
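As far as I can tell, langchain doesn't expose a worker-count knob here, but a crude client-side pacer is easy to sketch. Everything below is illustrative (the decorator and embed_one are made-up names, not langchain or openai API):

```python
import time
from functools import wraps

def rate_limited(calls_per_minute):
    """Decorator that enforces a minimum gap between calls so a client
    stays under a per-minute request quota."""
    min_interval = 60.0 / calls_per_minute
    last_call = [float("-inf")]  # timestamp of the previous call

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)
            last_call[0] = time.monotonic()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@rate_limited(calls_per_minute=300)  # the Azure embeddings quota mentioned above
def embed_one(text):
    ...  # call the embeddings endpoint here
```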

emigre459 avatar Mar 25 '23 23:03 emigre459

I'm suffering from these same issues, although my error message is a bit different (likely since I'm using LangChain v0.0.123:

2023-03-25 19:47:29.827 INFO    openai: error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2022-12-01 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 5 seconds. Please contact Azure support service if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False
2023-03-25 19:47:29.831 WARNING langchain.embeddings.openai: Retrying langchain.embeddings.openai.embed_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: Requests to the Embeddings_Create Operation under Azure OpenAI API version 2022-12-01 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 5 seconds. Please contact Azure support service if you would like to further increase the default rate limit..

It seems to me like the wait time between retries (which have clearly been handled in multiple places in LangChain) may not be the issues so much as the number of async workers sent to Azure in the space of a minute (since the limit is something like 300 per minute for embeddings). Does anyone know where in the LangChain code the number of max async workers is defined such that we could throttle it down to avoid this error perhaps?

I don't really think this is the same issue; maybe open a new issue page? I did encounter the same problem, though. My workaround is to add a sleep inside the langchain library source code (I don't need high performance). I did not find any rate or concurrency limiter available to use.

Granine avatar Mar 26 '23 00:03 Granine

Apologies, I copied the wrong error (it's a related one, but raised when you try to use embeddings in a vector store through llama-index). My actual LangChain-only error is: Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False

@Granine where did you put the sleep call (at the top of the openai.py module, perhaps, or maybe in the embedding loop)? I'm (currently) fine with something like that, but it seems like that just repeats the retry logic, doesn't it? Still, if it works for you, I'm willing to give it a whirl!

emigre459 avatar Mar 26 '23 00:03 emigre459

I'm on langchain=0.0.119 but OpenAIEmbeddings() throws an AuthenticationError: Incorrect API key provided... it seems that it tries to authenticate through the OpenAI API instead of the AzureOpenAI service, even when I configured the OPENAI_API_TYPE and OPENAI_API_BASE previously. Does anyone have the same problem?... tried with version 0.0.117 but the problem persists

For some reason, the suggested implementation on the documentation (by setting environmental variables) does not work. However, a workaround is before openAIEmbedding is ever called, import openai package and set parameters manually eg:

import openai
openai.api_base = "www.x.com/" 
openai.api_type = 'azure'
openai.api_version = "2022-12-01" 
# optionally, set key

then

OpenAIEmbeddings(document_model_name="MODEL_NAME", chunk_size=1)

it worked! thank you! @Granine

germanpinzon807 avatar Mar 28 '23 15:03 germanpinzon807

I've spent the last 5 hours or so trying to work this ❗💲 #️⃣ ⚡er out, which has certainly put a damper on my hackathon spirits :)

I'm using:

  • Openai => 0.27.2
  • langchain => 0.0.127

and I was just trying to follow the simple indexing getting-started guide

Setting chunk_size=1 on the OpenAIEmbeddings initializer is indeed one part of the solution, at least until Azure allows more inputs per request. Azure seems to apply this as a filter in the router at the deployment level (or as part of the model ingestion handler): you'll get The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. with or without chunk_size=1, but you'll only get Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions. if you end up on a valid embedding model.

The embedding model is the key point here: if you try to use a model that outputs text rather than embedding vectors, you'll get openai.error.InvalidRequestError: The embeddings operation does not work with the specified model, text-davinci-002. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.

So actually, if you are receiving Too many inputs for model None. The max number of inputs is 1. ..., it's a sign your deployment is set up correctly in Azure and you're actually hitting it. Just pass chunk_size=1 and you'll hit our final boss: openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>

As a brief tangent, the easiest way to get this working up to this point is to create a deployment called text-embedding-ada-002 in Azure and deploy the model of the same name to it; then you're working with the langchain defaults as of the time of writing. If you want to use another name, pass OpenAIEmbeddings(model=<your embeddings deployment name>) (note: not model_name, nor document_model_name/query_model_name unless you specifically want different models for upsert vs. query; just model. See here). Hopefully MS adds a front-door model attribute => model router in the near future so things are API-compatible with the OpenAI API. Now, I digress. Onto M. Bison.

In the particular case of the getting-started guide I was using, the actual problem was that nothing was set up to bring in a text LLM for the LLMChain behind the operations. I confirmed this by adding "engine": "text-davinci-003" on the line below this (as well as having text-davinci-003 deployed to Azure under a deployment of the same name), which made everything work.

To 'properly' fix this you need to do as follows (assuming you have the right Azure env vars in .env):

import os

from dotenv import load_dotenv
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI

load_dotenv()

# Instantiate a Langchain OpenAI class, but give it a default engine
llm = OpenAI(model_kwargs={'engine': 'text-davinci-003'})

# Create a loader for the docs
loader = DirectoryLoader(os.path.join(os.path.dirname(__file__), 'my_docs_directory'))

# Sort out the chunking issue
embedding_model = OpenAIEmbeddings(chunk_size=1)

# Load things into your index; this all works fine as long as you do the above.
# Things are auto-instantiated to use Azure correctly, as long as you have a
# deployment of text-embedding-ada-002 with the same model in your Azure instance
index = VectorstoreIndexCreator(embedding=embedding_model).from_loaders([loader])

# This is where the fix is. You can pass in the pre-instantiated LLM with the
# default text model set, so the LLMChain behind the query action actually works
index.query(my_query, llm=llm)

But as you can see, this is case by case. I've poked around a lot, but I don't see a way to set the text LLM model on the high-level abstractions (i.e. VectorstoreIndexCreator) in a way that propagates. Hopefully it's there and I'm just missing it.

So the easy solution is... well, there's no easy solution. The problem is that the default instantiation of the Azure-based OpenAI LLM takes into account that it's running on Azure, but doesn't have sensible defaults or a way to pass through that you need to change the engine as well as, or instead of, the model. Really, this is just a huge pain for the library writers, caused by Azure's decision to have 'deployments' and not adhere to the model attribute like the OpenAI API.

EAYoshi avatar Mar 31 '23 00:03 EAYoshi

@EAYoshi I've been trying to play around with the code here: https://github.com/alphasecio/llama-index/blob/main/index.py to get it working on Azure. I changed it to use from langchain.llms import AzureOpenAI, but I'm not having any luck. I'm getting the same error as you; I tried setting chunk size to 1, but all I get is this:

error_code=DeploymentNotFound error_message='The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.' error_param=None error_type=None message='OpenAI API error received' stream_error=False

Do you think this is related?

LewisLebentz avatar Apr 03 '23 21:04 LewisLebentz

Likely, yes.

First, make sure these environment variables are set correctly in the context of your code

OPENAI_API_BASE=<azure OpenAI base URL without deployment>
OPENAI_API_KEY=<azure key>
OPENAI_API_TYPE=azure

Then, make sure you have an Azure deployment named text-davinci-003 that is set to use the text-davinci-003 model

Finally, try changing this line to be

llm_predictor = LLMPredictor(llm=OpenAI(model_kwargs={'engine':'text-davinci-003'}))

then play around with the temperature once it works; I'm not sure where that parameter should go

EAYoshi avatar Apr 04 '23 21:04 EAYoshi

This issue is actually a bug in how the Azure endpoint is addressed: the URL accessed is end_point/deployments/[model deployment name]/embeddings. The client fails to use the name of the model deployment and instead sends the model name, text-embedding-ada-002. If your model deployment name is different from the model name, this results in a "not found" error. Therefore, the solution is to rename your model deployment to text-embedding-ada-002. Happy coding.
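To make the point concrete, here's a sketch of how that URL is assembled (the helper and the example endpoint are illustrative, not Azure's SDK): the path segment identifies the deployment, so a client that substitutes the model name only works when the two coincide.

```python
def azure_embeddings_url(endpoint, deployment, api_version="2022-12-01"):
    """Build the Azure OpenAI embeddings URL. The path segment is the
    *deployment* name, not the underlying model name."""
    return (f"{endpoint}/openai/deployments/{deployment}"
            f"/embeddings?api-version={api_version}")

# A client that mistakenly sends the model name hits the right URL only
# when the deployment happens to share that name:
url = azure_embeddings_url("https://example.openai.azure.com",
                           "text-embedding-ada-002")
print(url)
```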

licesun avatar Apr 06 '23 08:04 licesun

I've been stuck all day and this thread solved my problems. Thank everyone who contributed to this thread!

jazzpujols34 avatar Apr 06 '23 09:04 jazzpujols34