
AzureOpenAI InvalidRequestError: Too many inputs. The max number of inputs is 1.

Aspyryan opened this issue 1 year ago

System Info

LangChain version: 0.0.166
Embeddings: OpenAIEmbeddings, model text-embedding-ada-002 (version 2)
LLM: AzureOpenAI

Who can help?

@hwchase17 @agola11

Information

  • [ ] The official example notebooks/scripts
  • [X] My own modified scripts

Related Components

  • [ ] LLMs/Chat Models
  • [X] Embedding Models
  • [ ] Prompts / Prompt Templates / Prompt Selectors
  • [ ] Output Parsers
  • [ ] Document Loaders
  • [X] Vector Stores / Retrievers
  • [ ] Memory
  • [ ] Agents / Agent Executors
  • [ ] Tools / Toolkits
  • [ ] Chains
  • [ ] Callbacks/Tracing
  • [ ] Async

Reproduction

Steps to reproduce:

  1. Set up Azure OpenAI embeddings by providing the key, version, etc.
  2. Load a document with a loader
  3. Set up a text splitter so you get more than 2 documents
  4. Add them to Chroma with .add_documents(List[Document])

This is some example code:

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter

pdf = PyPDFLoader(url)
documents = pdf.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# vectordb is a Chroma store created with the Azure embeddings from step 1
vectordb.add_documents(texts)
vectordb.persist()

Expected behavior

Embeddings should be added to the database. Instead it returns the error:

openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.

This happens because Microsoft only allows one embedding input per request, while the script tries to embed all the documents at once. The following code is where the issue comes up (I think): https://github.com/hwchase17/langchain/blob/258c3198559da5844be3f78680f42b2930e5b64b/langchain/embeddings/openai.py#L205-L214 The input should be a one-dimensional array, not multi-dimensional.
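
To see the restriction in isolation, here is a minimal sketch calling the openai 0.x SDK directly (the resource URL, key, and deployment name are placeholders):

import openai

openai.api_type = "azure"
openai.api_base = "https://<resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-03-15-preview"
openai.api_key = "<key>"  # placeholder

# A single input per request works on Azure:
openai.Embedding.create(engine="embeddings", input=["hello world"])

# Two or more inputs in one request trigger the error on affected deployments:
# openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 1.
openai.Embedding.create(engine="embeddings", input=["hello", "world"])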

Aspyryan avatar May 12 '23 12:05 Aspyryan

I might have mitigated the issue by setting the chunk size on the embeddings: embedding = OpenAIEmbeddings(deployment="embeddings", model="text-embedding-ada-002", chunk_size=1)
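
For context, a sketch of that mitigation wired into the reproduction above (the Chroma setup is an assumption, since it was not shown):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# chunk_size=1 makes LangChain send one text per embedding request,
# matching the Azure limit, while still writing to the store in one call
embedding = OpenAIEmbeddings(
    deployment="embeddings",
    model="text-embedding-ada-002",
    chunk_size=1,
)
vectordb = Chroma(persist_directory="db", embedding_function=embedding)
vectordb.add_documents(texts)  # texts from the splitter above
vectordb.persist()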

Aspyryan avatar May 12 '23 12:05 Aspyryan

In the JavaScript version of LangChain, the parameter chunk_size is named batchSize.

fastsyrup avatar May 26 '23 08:05 fastsyrup

Yes, on Azure you just have to embed one at a time:

["id1"], ["meta1"], ["doc1"]

Wrong example:

["id1", "id2"], ["meta1", "meta2"], ["doc1", "doc2"]

With that you will get: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon....... 😂

lingfengchencn avatar May 30 '23 17:05 lingfengchencn

Can anyone tell me how to fix this issue?

tushaar9027 avatar Jun 12 '23 10:06 tushaar9027

I might have mitigated the issue by adding the chunk size to the embeddings: embedding = OpenAIEmbeddings(deployment="embeddings",model="text-embedding-ada-002", chunk_size = 1)

Why did you choose chunk_size = 1? And can you explain why it does not work with the same chunk_size used in the text_splitter? Thanks in advance.

lucasandre22 avatar Jun 13 '23 23:06 lucasandre22

I might have mitigated the issue by adding the chunk size to the embeddings: embedding = OpenAIEmbeddings(deployment="embeddings",model="text-embedding-ada-002", chunk_size = 1)

Why did you choose chunk_size = 1? And can you explain why it does not work with the same chunk_size used in the text_splitter? Thanks in advance.

I am not positive, but from my understanding, Azure only allows you to embed one string at a time. It will give you the error described above if you try to send more than one, so we must limit our chunk_size to one if we are using Azure and have not had our limit increased.

jeremiah-dibble avatar Jun 14 '23 19:06 jeremiah-dibble

Thank you, now I understand the difference between the chunk_size in the embeddings and in the text_splitter: they work differently, since in the embeddings it refers to the number of chunks per batch.
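
To make the distinction concrete, a small sketch contrasting the two parameters (the values are illustrative):

from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter

# Splitter chunk_size: the maximum size of each text chunk
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

# Embeddings chunk_size: how many chunks go into one embedding request
# (the batch size), which is what Azure limits
embedding = OpenAIEmbeddings(deployment="embeddings", chunk_size=1)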

lucasandre22 avatar Jun 14 '23 23:06 lucasandre22

After changing the chunk size to 1, a rate limit error is happening.

PRAJINPRAKASH avatar Jun 22 '23 10:06 PRAJINPRAKASH

How about the following code?

pdf = PyPDFLoader(url)
documents = pdf.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# add documents one at a time so each embedding request has a single input
for text in texts:
    vectordb.add_documents([text])
vectordb.persist()

makoto-soracom avatar Jun 22 '23 22:06 makoto-soracom

@makoto-soracom it works! But I don't think it's ideal, because we call the vector DB once per document alongside each Azure API call. Just to work around the Azure restriction, it wastes resources on vector DB calls.

vbonluk avatar Jun 24 '23 19:06 vbonluk

with chunk_size = 1 I am getting an error when doing big embeddings

aiakubovich avatar Jul 03 '23 06:07 aiakubovich

I got several warnings related to the chunk being bigger than 1, but even with those warnings I was able to load and use the document chunks.

lucasandre22 avatar Jul 03 '23 16:07 lucasandre22

@lucasandre22 do you use Azure OpenAI APIs?

aiakubovich avatar Jul 03 '23 16:07 aiakubovich

Yep, no issues besides those warnings.

lucasandre22 avatar Jul 03 '23 18:07 lucasandre22

I'm getting this error too; it seems Azure doesn't support multiple inputs per request, so I switched to OpenAI's API, and now it works fine. See this FAQ entry: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/faq#i-am-trying-to-use-embeddings-and-received-the-error--invalidrequesterror--too-many-inputs--the-max-number-of-inputs-is-1---how-do-i-fix-this-

kylooh avatar Jul 04 '23 06:07 kylooh

chunk_size = 1

with chunk_size = 1 I am getting an error when doing big embeddings

Me too.

sunyq1995 avatar Jul 14 '23 14:07 sunyq1995

Which error are you getting?

lucasandre22 avatar Jul 14 '23 14:07 lucasandre22

Which error are you getting?

Sorry for the wrong reply; what I wanted to say is: after changing the chunk size to 1, a rate limit error is happening.

I solved it by building the embeddings one by one, and constructing the vector store with FAISS.from_embeddings() instead of FAISS.from_texts().

sunyq1995 avatar Jul 14 '23 15:07 sunyq1995

Not sure if @sunyq1995 means this, but this worked for me, and I think it was faster than doing from_texts:

from langchain.document_loaders import DataFrameLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import TokenTextSplitter
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings(
    deployment=embedding_deployment_id,
    model=embedding_model_name,
    chunk_size=1,
    max_retries=10,
    show_progress_bar=True,
)

loader = DataFrameLoader(
    data_df,
    page_content_column="text",
)
text_splitter = TokenTextSplitter(chunk_size=2_000, chunk_overlap=5)
documents = text_splitter.split_documents(loader.load())

returned_embeddings = embeddings.embed_documents(
    [doc.page_content for doc in documents],
)

docsearch = FAISS.from_embeddings(
    text_embeddings=[
        (doc.page_content, embedding)
        for doc, embedding in zip(documents, returned_embeddings)
    ],
    embedding=embeddings,
    metadatas=[doc.metadata for doc in documents],
)

marctorsoc avatar Jul 15 '23 09:07 marctorsoc

Not sure if @sunyq1995 means this, but this worked for me, and I think it was faster than doing from_texts (full code in the previous comment)

Thanks @marctorsoc for clarifying, that's exactly what I mean. But I have a question: do you get any rate limit warning when you run the code below?

returned_embeddings = embeddings.embed_documents(
    [doc.page_content for doc in documents],
)

sunyq1995 avatar Jul 17 '23 04:07 sunyq1995

It looks like the team increased the limit; a chunk_size of 16 works for me (with a deployed text-embedding-ada-002).

I've deployed my instance of Azure OpenAI to eastus (quotas may differ per Azure region).

ThorstenHans avatar Jul 21 '23 06:07 ThorstenHans

I was able to pass in 16 documents at a time too without the max-number-of-inputs error. However, I had quite a few documents, so I used a for loop, which worked, but I had to add time.sleep(2), otherwise I got a rate limit error:

openai.error.RateLimitError: Requests to the Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. Operation under Azure OpenAI API version 2023-03-15

see this thread.

example code:

import time

batch = 16
total_docs = len(all_docs)
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddings_model)

for i in range(0, total_docs, batch):
    sample_docs = all_docs[i:i + batch]  # take the next batch of 16 documents
    vectordb.add_documents(sample_docs)
    time.sleep(2)  # embarrassing but works
vectordb.persist()
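
As an alternative to the fixed sleep, a retry-with-exponential-backoff sketch reusing the variables above, assuming the openai 0.x error classes (note that OpenAIEmbeddings also has a built-in max_retries parameter, as used in an earlier comment):

import time
import openai

def add_with_backoff(vectordb, docs, max_retries=5):
    # retry a batch insert, doubling the wait after each rate limit hit
    delay = 2.0
    for attempt in range(max_retries):
        try:
            vectordb.add_documents(docs)
            return
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2

for i in range(0, total_docs, batch):
    add_with_backoff(vectordb, all_docs[i:i + batch])
vectordb.persist()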

kadereub avatar Jul 26 '23 18:07 kadereub

I have the same error when using VectorstoreIndexCreator with Azure OpenAI. How can I set the max number of inputs?

Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support...


from langchain.document_loaders import CSVLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import DocArrayInMemorySearch

file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)

# embeddings is the Azure OpenAIEmbeddings instance configured elsewhere
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch,
    embedding=embeddings,
).from_loaders([loader])
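
Based on later comments in this thread, a likely fix is to set chunk_size on the embeddings object passed to VectorstoreIndexCreator. A sketch (the deployment name is a placeholder):

from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import DocArrayInMemorySearch

# chunk_size=16 caps the number of inputs per embedding request,
# matching the current Azure limit
embeddings = OpenAIEmbeddings(deployment="embeddings", chunk_size=16)

index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch,
    embedding=embeddings,
).from_loaders([loader])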

huislaw avatar Aug 04 '23 09:08 huislaw

@huislaw here is my solution: use a chunkify function to cap the number of inputs per request (max 16).

import logging
import os
from typing import Sequence

from langchain.chains import RetrievalQA
from langchain.chat_models import AzureChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)


def chunkify(arr: Sequence, size: int = 8):
    # yield successive batches of at most `size` items (len() needs a Sequence)
    for i in range(0, len(arr), size):
        yield arr[i : i + size]


embedder = OpenAIEmbeddings(
    openai_api_key=os.getenv("OPENAI_EMBEDDING_API_KEY"),
    openai_api_base=os.getenv("OPENAI_EMBEDDING_API_BASE"),
    openai_api_version=os.getenv("OPENAI_EMBEDDING_API_VERSION"),
    openai_api_type=os.getenv("OPENAI_EMBEDDING_API_TYPE"),
    deployment=os.getenv("OPENAI_EMBEDDING_API_MODEL"),
)


chatllm = AzureChatOpenAI(
    openai_api_key=os.getenv("OPENAI_CHAT_API_KEY"),
    openai_api_base=os.getenv("OPENAI_CHAT_API_BASE"),
    openai_api_version=os.getenv("OPENAI_CHAT_API_VERSION"),
    openai_api_type=os.getenv("OPENAI_CHAT_API_TYPE"),
    deployment_name=os.getenv("OPENAI_CHAT_API_MODEL"),
    temperature=0,
)

with open("document_urls.txt", "r") as F:
    urls = F.read().split("\n")


loader = WebBaseLoader(web_path=urls)
data = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

vectorstore = Chroma(embedding_function=embedder)
for chunk in chunkify(all_splits):
    vectorstore.add_documents(chunk)

retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(), llm=chatllm
)

qa_chain = RetrievalQA.from_chain_type(chatllm, retriever=retriever_from_llm)
result = qa_chain({"query": "How many versions are there in AAVE"})
print(result)

alan890104 avatar Aug 17 '23 08:08 alan890104

As of 8/5/23, the easiest fix is to pass in chunk_size=16 when creating OpenAIEmbeddings for an Azure deployment. Some of the other solutions here are more complicated than using this built-in functionality. As some have noted, the limit has been increased to 16 from 1.

Confusingly, this value is distinct from the chunk size for text splitting. Here, the configuration tells the OpenAIEmbeddings object to create 16 embeddings at a time, which conforms to the Azure limit. In the TypeScript version of LangChain, the name of this configuration is batchSize.
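
A minimal sketch of that fix (the deployment name is a placeholder):

from langchain.embeddings import OpenAIEmbeddings

# batch 16 inputs per embedding request, the current Azure maximum
embeddings = OpenAIEmbeddings(deployment="embeddings", chunk_size=16)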

johnjensenish avatar Sep 05 '23 18:09 johnjensenish

Can you guys try the patch in #10707?

mspronesti avatar Sep 19 '23 11:09 mspronesti

chunk_size here in the Azure OpenAIEmbeddings() refers to the number of embeddings it creates per request, as opposed to the LangChain text-splitter chunk_size, which controls the size of each chunk. chunk_size=16 worked at this time.

shruti-z avatar Sep 27 '23 05:09 shruti-z

Adding chunk_size=1 while creating the embeddings worked for me when using an Azure OpenAI API key.

embeddings = AzureOpenAIEmbeddings(deployment=AZURE_OPENAI_DEPLOYMENT, openai_api_version=AZURE_OPENAI_VERSION, chunk_size=1)

jdeepak-4u avatar Feb 28 '24 03:02 jdeepak-4u