QA chain is not working properly
Hello everyone,
I have implemented my project following the Question Answering over Docs example from the tutorial. I designed a long custom prompt and used load_qa_chain with chain_type set to "stuff". However, when I call chain.run, the output is incomplete.
Does anyone know what might be causing this issue?
Is it because the tokens exceed the maximum context size?
```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.question_answering import load_qa_chain

llm = ChatOpenAI(
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
    temperature=0,
    openai_api_key=OPENAI_API_KEY,
)
chain = load_qa_chain(llm, chain_type="stuff")
docs = docsearch.similarity_search(query, include_metadata=True, k=10)
r = chain.run(input_documents=docs, question=fq)
```
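In case it helps anyone debugging this: with chain_type="stuff" all k=10 documents get concatenated into one prompt, so you can sanity-check whether the retrieved text would blow past the model's context window before calling the chain. Below is a minimal sketch; `trim_docs` is a hypothetical helper (not a LangChain API), and the ~4-characters-per-token heuristic is a rough assumption, not an exact count:

```python
# Hypothetical helper: drop trailing docs so the stuffed prompt stays under
# the model's context window (rough heuristic: ~4 characters per token).
def trim_docs(texts, max_tokens=3000, chars_per_token=4):
    budget = max_tokens * chars_per_token
    kept, used = [], 0
    for text in texts:
        if used + len(text) > budget:
            break  # this doc would overflow the budget; stop here
        kept.append(text)
        used += len(text)
    return kept

# Example: three 5000-char chunks against a 3000-token (~12000-char) budget.
chunks = ["a" * 5000, "b" * 5000, "c" * 5000]
kept = trim_docs(chunks, max_tokens=3000)
print(len(kept))  # only the chunks that fit the budget survive
```

For an exact count you would use a real tokenizer (e.g. tiktoken for OpenAI models), but even this crude check makes it obvious when k=10 chunks cannot fit in one call.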
Has the same problem, did you figure it out, dude?
I have the same problem. I think the documents are chunked properly, but this chain probably merges all the text from the docs into a single call.
Yup, I tested it. It only included some of the docs, plus part of the last doc up to the context limit.
Check https://docs.langchain.com/docs/components/chains/index_related_chains
For stuff:
Cons: Most LLMs have a context length, and for large documents (or many documents) this will not work as it will result in a prompt larger than the context length.
Trying other chain types like map_reduce might work.
Same problem here.
I get empty output even if I pass a single small document to the chain. I have also tried with map_reduce.
```python
chain = load_qa_chain(local_llm, chain_type="map_reduce")
chain.run(input_documents=[docs[0]], question=query)
```
output:
Out[69]: ''
same question
Hi, @tron19920125! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, the issue you reported is related to the QA chain implemented using the Question Answering over Docs example. It seems that the output is incomplete, and there is a suspicion that it may be caused by exceeding the maximum token size. Some users have also suggested trying other chain types like map_reduce. Additionally, one user shared a link to check for potential limitations with large documents or multiple documents.
Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project. If you have any further questions or concerns, please don't hesitate to let us know.