
Memory not supported with sources chain?

Open jordanparker6 opened this issue 1 year ago • 14 comments

Memory doesn't seem to be supported when using the 'sources' chains: saving the context fails because these chains write multiple output keys.

Is there a workaround for this?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[13], line 1
----> 1 chain({ "question": "Do we have any agreements with INGRAM MICRO." }, return_only_outputs=True)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:118, in Chain.__call__(self, inputs, return_only_outputs)
    116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
--> 118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:170, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    168 self._validate_outputs(outputs)
    169 if self.memory is not None:
--> 170     self.memory.save_context(inputs, outputs)
    171 if return_only_outputs:
    172     return outputs

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/summary_buffer.py:59, in ConversationSummaryBufferMemory.save_context(self, inputs, outputs)
     57 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
     58     """Save context from this conversation to buffer."""
---> 59     super().save_context(inputs, outputs)
     60     # Prune buffer if it exceeds max token limit
     61     buffer = self.chat_memory.messages

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
...
---> 37         raise ValueError(f"One output key expected, got {outputs.keys()}")
     38     output_key = list(outputs.keys())[0]
     39 else:

ValueError: One output key expected, got dict_keys(['answer', 'sources'])

jordanparker6 avatar Apr 01 '23 03:04 jordanparker6

+1

moraneden avatar Apr 01 '23 16:04 moraneden

I'm having the same problem when trying to use memory with RetrievalQAWithSourcesChain. I found and followed the LangChain tutorial below, but nothing works:

https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html
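
For reference, the pattern that tutorial demonstrates is roughly the following (a minimal sketch, assuming an OpenAI LLM and docs already retrieved); the key point is telling the memory which input to track via input_key:

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = """You are a chatbot having a conversation with a human.

{context}

{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input", "context"], template=template
)
# The memory only records the 'human_input' key out of the chain's inputs
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_chain(
    OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt
)
chain(
    {"input_documents": docs, "human_input": "Do we have any agreements with INGRAM MICRO?"},
    return_only_outputs=True,
)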

mystvearn avatar Apr 02 '23 12:04 mystvearn

Having the same issue here; it would be really nice to have an example of how to get this to work.

pirtlj avatar Apr 05 '23 16:04 pirtlj

+1

VladoPortos avatar Apr 25 '23 13:04 VladoPortos

Receiving the same error when trying to use memory in RetrievalQAWithSourcesChain

deathblade287 avatar Apr 25 '23 16:04 deathblade287

same issue with ConversationalRetrievalChain

atc0m avatar May 14 '23 20:05 atc0m

+1

shdmitry2000 avatar May 15 '23 12:05 shdmitry2000

+1

gborgonovo avatar May 15 '23 22:05 gborgonovo

+1

Eliseowzy avatar May 16 '23 12:05 Eliseowzy

You can use this workaround for the time being. It should be pretty safe and shouldn't break other use cases (i.e. other chains), but I don't know LangChain deeply enough to guarantee that.

Edit lib/python3.10/site-packages/langchain/memory/chat_memory.py

Find this section:

class BaseChatMemory(BaseMemory, ABC):
    chat_memory: BaseChatMessageHistory = Field(default_factory=ChatMessageHistory)
    output_key: Optional[str] = None
    input_key: Optional[str] = None
    return_messages: bool = False

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
        if self.input_key is None:
            prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key
        if self.output_key is None:
            if len(outputs) != 1:
                raise ValueError(f"One output key expected, got {outputs.keys()}")
            output_key = list(outputs.keys())[0]
        else:
            output_key = self.output_key
        self.chat_memory.add_user_message(inputs[prompt_input_key])
        self.chat_memory.add_ai_message(outputs[output_key])

Change:

        if self.output_key is None:
            if len(outputs) != 1:
                raise ValueError(f"One output key expected, got {outputs.keys()}")
            output_key = list(outputs.keys())[0]

To:

        if self.output_key is None:
            if len(outputs) == 1:
                output_key = list(outputs.keys())[0]
            else:
                if "answer" in outputs.keys():
                    output_key = "answer"
                else:
                    raise ValueError(f"One output key expected, got {outputs.keys()}")
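
If you'd rather not edit the installed package, the same change can be applied as a runtime monkeypatch. A minimal sketch, under the same assumption that the multi-output chain names its answer key "answer":

from typing import Any, Dict
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.utils import get_prompt_input_key

def patched_save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
    """Save context, preferring the 'answer' key when several outputs exist."""
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is not None:
        output_key = self.output_key
    elif len(outputs) == 1:
        output_key = list(outputs.keys())[0]
    elif "answer" in outputs:
        output_key = "answer"  # assumption: sources chains name their answer key "answer"
    else:
        raise ValueError(f"One output key expected, got {outputs.keys()}")
    self.chat_memory.add_user_message(inputs[prompt_input_key])
    self.chat_memory.add_ai_message(outputs[output_key])

# Apply to every memory class that inherits from BaseChatMemory
BaseChatMemory.save_context = patched_save_context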

chiva avatar May 17 '23 19:05 chiva

Seems to be similar to https://github.com/hwchase17/langchain/issues/2068#issuecomment-1494537932

You probably have to define what your output_key actually is to get the chain to work.

KEKL-KEKW avatar May 20 '23 19:05 KEKL-KEKW

I found the solution by reading the source code: memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')
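
For context, wiring that memory into a sources chain looks roughly like this (a minimal sketch; llm and retriever are assumed to already exist):

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.memory import ConversationSummaryBufferMemory

# Tell the memory which input/output keys to record, so save_context()
# no longer trips over the extra 'sources' key.
memory = ConversationSummaryBufferMemory(llm=llm, input_key="question", output_key="answer")
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, chain_type="stuff", retriever=retriever, memory=memory
)
chain({"question": "Do we have any agreements with INGRAM MICRO?"}, return_only_outputs=True)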

ikebo avatar May 21 '23 14:05 ikebo

I found the solution by reading the source code: memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')

This works like a charm!

portkeys avatar May 23 '23 15:05 portkeys

Can confirm it is working for ConversationBufferMemory too.

memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', output_key='answer', return_messages=True)

Thanks a bunch!

cyberjj999 avatar May 27 '23 07:05 cyberjj999

Adding the output_key as above worked for me also.

dangarfield avatar May 31 '23 09:05 dangarfield

The ConversationalRetrievalChain adds a memory by default; shouldn't it also set the output_key for that memory if none was passed?

It seems strange to allow it to be instantiated without a memory and then fail at runtime because the memory was not set up properly.

I'm not sure exactly where we could add that, though. Maybe here: https://github.com/hwchase17/langchain/blob/980c8651743b653f994ad6b97a27b0fa31ee92b4/langchain/chains/conversational_retrieval/base.py#L117, where after we set the output we could also set the output_key for the memory if it does not have one.
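
Something along these lines, perhaps (a hypothetical sketch, not actual LangChain code):

# Hypothetical: after the chain is constructed with a memory, default the
# memory's output_key so save_context() knows which output to persist.
if memory is not None and getattr(memory, "output_key", None) is None:
    memory.output_key = "answer"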

ogabrielluiz avatar Jun 23 '23 12:06 ogabrielluiz

Hello @cyberjj999, I am using a router chain with ConversationBufferMemory(), but when running the code it doesn't seem that any information is being stored in the memory. Do you have any idea about using a router chain with memory?

Ali-Issa-aems avatar Jul 20 '23 16:07 Ali-Issa-aems

I tried using langchain.memory.ConversationBufferMemory() in RetrievalQAWithSourcesChain as:

qa = RetrievalQAWithSourcesChain(..., memory=ConversationBufferMemory(memory_key="history", input_key="query"))

I am able to get the output, but it is followed by an error:

INFO:     127.0.0.1:63947 - "GET /extract/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/PDF-QA/main.py", line 40, in extract_file
    response = qa_chaining(qabuild, "What is the document about?")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/PDF-QA/_functions.py", line 71, in qa_chaining
    result = qa({"question": user_question}, return_only_outputs=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 118, in __call__
    return self.prep_outputs(inputs, outputs, return_only_outputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 170, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 34, in save_context
    input_str, output_str = self._get_input_output(inputs, outputs)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 26, in _get_input_output
    raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'sources', 'source_documents'])

What should be done in this case?

Shuhul24 avatar Jul 21 '23 10:07 Shuhul24

I'm having the same problems trying to use RetrievalQAWithSourcesChain with memory. Does anyone know a way to make it work?

User2345678910 avatar Jul 31 '23 10:07 User2345678910

Do the following:

  1. Create memory with input_key and output_key: memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key="question", output_key="answer")
  2. Initialize ConversationalRetrievalChain with memory: qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"), retriever=retriever, return_source_documents=True, memory=memory)
  3. Make a query to the QA using the input_key: qa({"question": prompt})
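
Putting those three steps together (a minimal sketch; retriever is assumed to already exist):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# 1. Memory with explicit input_key and output_key
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="question",
    output_key="answer",
)

# 2. Chain with source documents returned and the memory attached
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"),
    retriever=retriever,
    return_source_documents=True,
    memory=memory,
)

# 3. Query using the input_key
result = qa({"question": "Do we have any agreements with INGRAM MICRO?"})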

farbodnowzad avatar Aug 03 '23 23:08 farbodnowzad

On my side, I was trying to keep the two arguments return_source_documents=True and return_generated_question=True. I've found a solution that works for me: in the BaseChatMemory source code, I deleted the two lines with the raise.

if len(outputs) != 1:
    raise ValueError(f"One output key expected, got {outputs.keys()}")

This allows me to keep "source_documents" and "generated_question" in the output without breaking the code. To apply the change at runtime instead of editing the source, you just have to run the code below.

import langchain
from typing import Dict, Any, Tuple
from langchain.memory.utils import get_prompt_input_key

def _get_input_output(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> Tuple[str, str]:
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is None:
        # Instead of raising on multiple outputs, take the first key
        # (the answer) and leave the extra keys in the output dict.
        output_key = list(outputs.keys())[0]
    else:
        output_key = self.output_key
    return inputs[prompt_input_key], outputs[output_key]

# Monkeypatch so every memory class inheriting from BaseChatMemory
# uses the relaxed version.
langchain.memory.chat_memory.BaseChatMemory._get_input_output = _get_input_output

Here, the original method : https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L11

JonaTri avatar Aug 04 '23 08:08 JonaTri

@JonaTri thank you very much it works for me! I think the fix should be merged to langchain

antonkulaga avatar Aug 26 '23 10:08 antonkulaga

Does anyone know how to get this to work with an Agent? I got it to work as a standalone chain, but still get:

File "lib/python3.9/site-packages/langchain/chains/base.py", line 133, in _chain_type
    raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.

bhaktatejas922 avatar Aug 27 '23 02:08 bhaktatejas922

+1

faisal-saddique avatar Sep 01 '23 03:09 faisal-saddique

With RetrievalQA.from_chain_type() you can use memory. To avoid ValueError: One output key expected, got dict_keys(['answer', 'sources']), you need to specify the key values in the memory constructor (e.g. ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key='query', output_key='result')). It would be nice to add this to the official documentation, because right now it looks like it's not possible, or only possible with ConversationalRetrievalChain.from_llm(). The issue can now be closed @hwchase17
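
Concretely, something like this (a minimal sketch; llm and retriever are assumed to already exist):

from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="query",    # RetrievalQA's input key
    output_key="result",  # RetrievalQA's answer key
)
qa = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    memory=memory,
)
qa({"query": "Do we have any agreements with INGRAM MICRO?"})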

reddiamond1234 avatar Sep 04 '23 12:09 reddiamond1234

I propose a solution.

"langchain/agents/agent.py" is the class from which all the extension chains mentioned above are derived.

    @property
    @abstractmethod
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
    @property
    def output_keys(self) -> List[str]:
        """Return the singular output key.

        :meta private:
        """
        if self.return_intermediate_steps:
            return self.agent.return_values + ["intermediate_steps"]
        else:
            return self.agent.return_values

All memory-related objects report their keys through the methods above, but when these keys are passed to the output parser, only the memory key is not passed. So, depending on the purpose, the key values that a given agent does not need must be excluded.

For example:

    @property
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
        return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})

The source above is from the definition of Agent(BaseSingleActionAgent).

The key values to be excluded by the methods mentioned above are also accepted as arguments, so a clear, unified handling of input_key and output_key is needed to prevent branching problems in each chain. The same method is already implemented differently across many chains, which keeps producing errors in the related chains.

YamonBot avatar Sep 22 '23 17:09 YamonBot

Hi, @jordanparker6

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you reported is related to memory not being supported when using the 'sources' chains, causing errors with writing multiple output keys. There have been discussions and suggestions in the comments regarding workarounds, modifying the source code, specifying key values in the memory function, and potential changes to the official documentation. However, the issue remains unresolved.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!

dosubot[bot] avatar Dec 22 '23 16:12 dosubot[bot]

Adding the output_key as above worked for me also.

Actually it would work for every type of memory object.

Ojasmodi avatar Mar 01 '24 18:03 Ojasmodi