
Is there no chain for question answer with sources and memory?

Open shreyabhadwal opened this issue 2 years ago • 3 comments

I have tried using memory with load_qa_with_sources_chain, but it throws an error. It works fine with load_qa_chain. Is there no way to do this other than creating a custom chain?

shreyabhadwal avatar Feb 23 '23 09:02 shreyabhadwal

I have accomplished what you are trying to do using a conversational agent and providing a QA-with-sources chain as a tool to that agent. It's very similar to this example in the help docs: https://langchain.readthedocs.io/en/latest/modules/memory/examples/conversational_agent.html

Tools are pretty easy to define, so if you already have a working QA chain you should be able to adapt the examples here: https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_tools.html
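
A rough sketch of the wiring (names are illustrative, and qa_chain_func is a placeholder for whatever function runs your QA chain and returns an answer string):

from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

tools = [
    Tool(
        name="QAWithSources",
        func=qa_chain_func,  # placeholder: takes a question string, returns an answer string
        description="useful for answering questions about the indexed documents",
    )
]

memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent="conversational-react-description",
    verbose=True,
    memory=memory,
)
agent.run("What do the documents say about X?")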

brandco avatar Feb 23 '23 16:02 brandco

When I try to do this, my agent never refers to the sources I provide in my QA-with-sources tool. How did you pass your input documents? I think I am doing it wrong.

Here's what I am trying to do:

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
chain({"input_documents": search_index.similarity_search(question1, k=4), "question": question1},
      return_only_outputs=True)

tools = [Tool(name="QASourcesChain", func=chain.run, description="use to answer every question")]

memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True,
                         memory=memory)
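
Maybe the problem is that the agent calls chain.run with only the question text, so input_documents never gets filled in? If so, a wrapper along these lines might be what's needed (untested, adapted from the snippet above):

def qa_tool(question):
    # run retrieval inside the tool so the chain always receives documents
    docs = search_index.similarity_search(question, k=4)
    return chain({"input_documents": docs, "question": question},
                 return_only_outputs=True)["output_text"]

tools = [Tool(name="QASourcesChain", func=qa_tool, description="use to answer every question")]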

shreyabhadwal avatar Feb 24 '23 09:02 shreyabhadwal

I can confirm I'm not getting a source output with load_qa_with_sources_chain, though I can see the source metadata in my documents.

loader = DirectoryLoader('knowledge/', glob="*.md")
knowledge = loader.load()
embeddings = HuggingFaceEmbeddings()
docsearch = Chroma.from_documents(knowledge, embeddings, persist_directory="chroma-db",
                                  metadatas=[{"source": str(i)} for i in range(len(knowledge))])
llm = HuggingFacePipeline(pipeline=pipe)
docs = docsearch.similarity_search(query)
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
result = chain({"input_documents": docs, "question": query}, return_only_outputs=False)

Output:

{ 'output_text': ' Re-use existing images'}
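
One guess (untested): load_qa_with_sources_chain builds the SOURCES part from each document's metadata["source"], and from_documents appears to take metadata from the Document objects themselves rather than from a separate metadatas kwarg, so the kwarg above may be silently ignored. Making sure the source lives on the documents before indexing might help:

# ensure every Document carries a "source" before indexing
# (DirectoryLoader normally sets this to the file path already)
for i, doc in enumerate(knowledge):
    doc.metadata.setdefault("source", str(i))

docsearch = Chroma.from_documents(knowledge, embeddings, persist_directory="chroma-db")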

chris-aeviator avatar Mar 08 '23 10:03 chris-aeviator

I had the same problem. It worked when I used a custom prompt, possibly because the default prompt of load_qa_chain is different from that of load_qa_with_sources_chain. Here's an example you could try:

template = """You are an AI chatbot having a conversation with a human. Given the following extracted parts of a long document and a question, create a final answer.  
ALWAYS return a "SOURCES" part in your answer.
The "SOURCES" part should be a reference to the sources in the documents from which you got your answer.
Example of your response should be:

---
The answer is foo

SOURCES: 
- xyz
---

=====BEGIN DOCUMENT=====
{summaries}
=====END DOCUMENT=====

=====BEGIN CONVERSATION=====
{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input", "summaries"],
    template=template,
)

memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
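
Calling it should then look roughly like this (docs and query come from your own retrieval step; the memory fills in {chat_history}, and the stuff chain fills {summaries} from input_documents):

docs = docsearch.similarity_search(query, k=4)
result = chain({"input_documents": docs, "human_input": query},
               return_only_outputs=True)
print(result["output_text"])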

faz-cxr avatar Mar 29 '23 23:03 faz-cxr

I was able to accomplish what I wanted using the ConversationalRetrievalChain.

According to the documentation itself, "The only difference between this chain and the RetrievalQAChain is that this allows for passing in of a chat history which can be used to allow for follow up questions."
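
A minimal sketch of that approach, assuming a vector store like the docsearch built above (return_source_documents keeps the retrieved documents in the result):

from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    docsearch.as_retriever(),
    return_source_documents=True,  # include the retrieved documents in the result
)

chat_history = []
result = qa({"question": query, "chat_history": chat_history})
chat_history.append((query, result["answer"]))
print(result["answer"], result["source_documents"])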

shreyabhadwal avatar Mar 30 '23 03:03 shreyabhadwal