
openai:error_code=None error_message='Too many inputs for model None. The max number of inputs is 1.

Open ShubhamVerma16 opened this issue 1 year ago • 7 comments

While using llama_index's GPTSimpleVectorIndex, I am reading a PDF file with SimpleDirectoryReader.

I am unable to create an index for the file; indexing fails with the error below:

INFO:openai:error_code=None error_message='Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False

The code works for some files and fails for others with the above error.

Please explain what "Too many inputs for model" means, and why this error occurs only for some files.

ShubhamVerma16 avatar Mar 28 '23 14:03 ShubhamVerma16

Can you provide a code snippet?

ghost avatar Mar 28 '23 15:03 ghost

Please find the code snippet below:

from langchain.llms import AzureOpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, PromptHelper, SimpleDirectoryReader

max_input_size = 500
num_output = 48
max_chunk_overlap = 20

llm = AzureOpenAI(deployment_name=engine)  # engine holds the Azure deployment name
llm_predictor = LLMPredictor(llm=llm)

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

documents = SimpleDirectoryReader('data').load_data()

index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)

response = index.query('query')

ShubhamVerma16 avatar Mar 28 '23 15:03 ShubhamVerma16

I think this is an issue in the llama_index library.

ghost avatar Mar 29 '23 14:03 ghost

Hi, I face the same error when using only LangChain modules.

Related issues in llama-index:

  • https://github.com/jerryjliu/llama_index/issues/947
  • https://github.com/jerryjliu/llama_index/issues/823

Code snippet:

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader

documents_folder = "text.txt"

loader = TextLoader(documents_folder)
documents = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)

llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo-0301",
    temperature=0,
)

embeddings = OpenAIEmbeddings(
    deployment="text-embedding-ada-002",
)

# Fails here on Azure: by default LangChain batches several texts into one embeddings request
vectorstore = Chroma.from_documents(documents, embeddings)
qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever()
)

query = "How many vacation days do I have?"
print(qa.run(query))

ddxgz avatar Apr 22 '23 09:04 ddxgz

Good afternoon, I am hitting the same error. Any updates?

alisalih1 avatar May 05 '23 22:05 alisalih1

Azure OpenAI currently does not support batching embeddings into a single API call. If you receive the error `InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon.`, it typically means an array of embedding inputs was passed as a batch rather than a single string. The string can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.

Ref: https://learn.microsoft.com/en-us/Azure/cognitive-services/openai/reference#embeddings
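
As a minimal illustration of the limitation, here is a sketch assuming the 2023-era openai 0.x Python SDK and an Azure embeddings deployment named text-embedding-ada-002:

import openai

openai.api_type = "azure"  # api_base, api_version, and api_key must also be set for your resource

# Rejected by Azure: a list of inputs is a batch, and this endpoint accepts only one input
openai.Embedding.create(engine="text-embedding-ada-002", input=["first text", "second text"])

# Accepted: one string per request
openai.Embedding.create(engine="text-embedding-ada-002", input="first text")

In LangChain, passing chunk_size=1 to OpenAIEmbeddings has the same effect: each embeddings request carries a single text.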

hanguofeng avatar May 14 '23 13:05 hanguofeng

Same as https://github.com/hwchase17/langchain/issues/1560; I believe this issue should be closed.

Imccccc avatar May 22 '23 12:05 Imccccc

> Hi, I face the same error when I'm using only langchain modules. [ddxgz's comment and code snippet, quoted above]

You need to set the chunk_size parameter of OpenAIEmbeddings() to 1; by default it is 1000.

Meanwhile, the deployment parameter of OpenAIEmbeddings() and the deployment_name parameter of AzureChatOpenAI() must both be the deployment names of your models in Azure. For example:

Azure OpenAI: [screenshot of the model deployment names in the Azure portal]

Google Colab Code: [screenshot of the corresponding constructor calls]
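
A minimal sketch of that configuration (the endpoint, key, and deployment names below are placeholders; substitute the values from your own Azure resource):

import os

from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings

# Azure credentials, read by the 2023-era openai 0.x SDK and LangChain
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_KEY"] = "<your-key>"

# Both names must match the deployment names shown in the Azure portal
llm = AzureChatOpenAI(deployment_name="gpt-35-turbo-0301", temperature=0)
embeddings = OpenAIEmbeddings(
    deployment="text-embedding-ada-002",
    chunk_size=1,  # Azure accepts only one embedding input per request
)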

mingjun1120 avatar May 31 '23 10:05 mingjun1120

Hi, @ShubhamVerma16! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding of the issue, you encountered an error with the llama_index GPTSimpleVectorIndex when creating an index for a PDF file. It seems that the error message indicates that there are too many inputs for the model, with a maximum of 1 input allowed. Some users have provided code snippets and suggested that this may be an issue with the llama index library. Another user mentioned that the error occurs when an array of embeddings is passed as a batch instead of a single string.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.

dosubot[bot] avatar Sep 21 '23 16:09 dosubot[bot]