
QA chain is not working properly

Open tron19920125 opened this issue 2 years ago • 3 comments

Hello everyone, I have implemented my project using the Question Answering over Docs example provided in the tutorial. I designed a long custom prompt and used load_qa_chain with chain_type set to "stuff". However, when I call chain.run, the output is incomplete. Does anyone know what might be causing this issue? Could it be that the tokens exceed the maximum context size?

llm = ChatOpenAI(
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
    temperature=0,
    openai_api_key=OPENAI_API_KEY,
)
chain = load_qa_chain(llm, chain_type="stuff")
docs = docsearch.similarity_search(query, include_metadata=True, k=10)
r = chain.run(input_documents=docs, question=fq)
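One quick way to test the token-limit suspicion is to estimate how many tokens the stuffed prompt would contain before running the chain. The sketch below is a rough heuristic (assuming ~4 characters per token; for exact counts use a real tokenizer such as tiktoken), and `fits_context` and its parameters are hypothetical names, not LangChain API:

```python
# Rough check of whether a "stuff" prompt may exceed the model's
# context window. The 4-characters-per-token ratio is only a
# heuristic; a real tokenizer (e.g. tiktoken) gives exact counts.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def fits_context(doc_texts, question, max_tokens=4096, answer_budget=512):
    """Return True if the concatenated docs plus the question are
    likely to fit within max_tokens, leaving room for the answer."""
    total = estimate_tokens(question) + sum(estimate_tokens(t) for t in doc_texts)
    return total + answer_budget <= max_tokens

# Example: 10 chunks of roughly 1000 tokens each will not fit
# in a 4k-token context window.
chunks = ["x" * 4000] * 10
print(fits_context(chunks, "What is the refund policy?"))  # False
```

With k=10 retrieved documents, it is easy to blow past a 4k context window, which would truncate or cut off the completion.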

tron19920125 avatar Apr 23 '23 03:04 tron19920125

I have the same problem. Did you figure it out?

godemerge avatar May 15 '23 08:05 godemerge

I have the same problem. I think the documents are chunked properly, but this chain probably merges all the text from the docs into a single call.

nikunjy avatar May 23 '23 04:05 nikunjy

Yup, I tested it. It only included some docs and part of the last doc, up to the context limit.

nikunjy avatar May 23 '23 04:05 nikunjy

Check https://docs.langchain.com/docs/components/chains/index_related_chains

For stuff:

Cons: Most LLMs have a context length, and for large documents (or many documents) this will not work as it will result in a prompt larger than the context length.

Trying another chain type such as map_reduce might work.
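To see why map_reduce avoids the context-length problem, here is a minimal sketch of the idea in plain Python (no LangChain calls; `ask_llm` is a hypothetical stand-in for a real model call):

```python
# Plain-Python sketch of the map_reduce pattern: answer the question
# against each chunk separately (map), then combine the partial
# answers in one final call (reduce). No single prompt has to hold
# all of the documents at once.
def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call the model here.
    return f"[answer based on: {prompt[:40]}...]"

def map_reduce_qa(chunks, question):
    # Map: one small prompt per chunk, so no call exceeds the
    # context window.
    partials = [ask_llm(f"Context: {c}\nQuestion: {question}") for c in chunks]
    # Reduce: combine the partial answers into a single final prompt.
    combined = "\n".join(partials)
    return ask_llm(f"Partial answers:\n{combined}\nQuestion: {question}")
```

The trade-off is more LLM calls (one per chunk plus the reduce step), so it is slower and costlier than "stuff" for small inputs.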

chenyang-zheng avatar May 30 '23 06:05 chenyang-zheng

Same problem here.

I get empty output even if I pass a single small document to the chain. I have also tried with map_reduce:

chain = load_qa_chain(local_llm, chain_type="map_reduce")
chain.run(input_documents=[docs[0]], question=query)

output:

Out[69]: ''

pybergonz avatar Jun 21 '23 14:06 pybergonz

Same question.

luoqingming110 avatar Aug 23 '23 04:08 luoqingming110

Hi, @tron19920125! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding, the issue you reported is related to the QA chain implemented using the Question Answering over Docs example. It seems that the output is incomplete, and there is a suspicion that it may be caused by exceeding the maximum token size. Some users have also suggested trying other chain types like map_reduce. Additionally, one user shared a link to check for potential limitations with large documents or multiple documents.

Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project. If you have any further questions or concerns, please don't hesitate to let us know.

dosubot[bot] avatar Nov 22 '23 16:11 dosubot[bot]