
A monkey patch demo to use `memory` with `history` in LCEL

Open 954-Ivory opened this issue 1 year ago β€’ 4 comments

The background is this: `BaseChatMemory.chat_memory` doesn't prune messages from persistent storage, as described in https://github.com/langchain-ai/langchain/issues/14957#issuecomment-1907951114
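To make the failure mode concrete, here is a minimal sketch (the classes are hypothetical stand-ins, not the real LangChain ones): when `messages` is a property that rebuilds a fresh list from storage on every access, calling `pop()` on it only mutates the throwaway copy, so pruning never reaches the backend.

```python
class FakeRedisHistory:
    """Stand-in for a storage-backed chat message history."""

    def __init__(self):
        self._storage = ["msg1", "msg2", "msg3"]  # stands in for the Redis list

    @property
    def messages(self):
        # A fresh copy on every access, like deserializing lrange() results.
        return list(self._storage)


hist = FakeRedisHistory()
hist.messages.pop(0)        # what prune() effectively does internally
print(len(hist.messages))   # still 3 -- storage was never touched
```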

So I made some monkey patches to work around the problem temporarily. The following demo shows how to use history and memory together in LCEL. Because too many modules are involved, I'd like core contributors to help me refine this idea.

from typing import Union, Any, Optional
from langchain.memory.chat_memory import BaseChatMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.load import load
from langchain_core.messages import BaseMessage
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.history import (
    RunnableWithMessageHistory,
    MessagesOrDictWithMessages,
    GetSessionHistoryCallable
)
from langchain_core.tracers.schemas import Run

from langchain.memory import ConversationSummaryBufferMemory
from langchain_core.prompts import MessagesPlaceholder, ChatPromptTemplate
from langchain_openai import ChatOpenAI


class MessageList(list):
    def __init__(self, *args, history=None, **kwargs):
        # Store the history on the instance; assigning it on the class (as a
        # class attribute in `__new__`) would leak state between sessions.
        super().__init__(*args, **kwargs)
        self._history = history

    def pop(self, __index=-1):
        # Also pop the message from the history storage.
        # RedisChatMessageHistory stores messages with lpush (newest at the
        # head), so the oldest message (index 0) lives at the Redis tail and
        # the newest (index -1) at the head.
        if __index == 0:
            self._history.redis_client.rpop(self._history.key)
        elif __index == -1:
            self._history.redis_client.lpop(self._history.key)
        else:
            raise IndexError("Redis doesn't support pop by index.")
        return super().pop(__index)


class RedisChatMessageHistoryFixed(RedisChatMessageHistory):

    @property
    def messages(self) -> MessageList[BaseMessage]:  # type: ignore
        messages = MessageList(super().messages, history=self)
        return messages


class RunnableWithMessageHistoryPlus(RunnableWithMessageHistory):
    memory: Optional[BaseChatMemory] = None

    def __init__(
            self,
            runnable: Runnable[
                MessagesOrDictWithMessages,
                Union[str, BaseMessage, MessagesOrDictWithMessages]
            ],
            get_session_history: GetSessionHistoryCallable,
            memory: Optional[BaseChatMemory] = None,
            **kwargs: Any
    ):
        super().__init__(runnable, get_session_history, **kwargs)
        if memory:
            self.memory = memory
            self.memory.input_key = self.input_messages_key
            self.memory.output_key = self.output_messages_key

    def _exit_history(self, run: Run, config: RunnableConfig) -> None:
        hist = config["configurable"]["message_history"]
        # Get the input messages
        inputs = load(run.inputs)
        input_val = inputs[self.input_messages_key or "input"]
        input_messages = self._get_input_messages(input_val)

        # If historic messages were prepended to the input messages, remove them to
        # avoid adding duplicate messages to history.
        if not self.history_messages_key:
            historic_messages = config["configurable"]["message_history"].messages
            input_messages = input_messages[len(historic_messages):]

        # Get the output messages
        output_val = load(run.outputs)
        output_messages = self._get_output_messages(output_val)

        # Monkey patch
        messages = zip(input_messages, output_messages)
        self.memory.chat_memory = hist
        #   `BaseChatMemory.save_context()` will call `add_message()` and `prune()`,
        #   whereas `RunnableWithMessageHistory` only calls `add_message()`.
        for i, o in messages:
            self.memory.save_context(
                {self.input_messages_key or 'input': i.content},
                {self.output_messages_key or 'output': o.content}
            )

        #   Some of the pruning logic of `BaseChatMemory` subclasses lives in
        #   `load_memory_variables()` (e.g. `ConversationSummaryBufferMemory`),
        #   so we extract the pruned `messages` from `load_memory_variables()`.
        messages = self.memory.load_memory_variables({})[self.history_messages_key].copy()

        #   Replace the messages in `BaseChatMessageHistory` with the pruned in-memory
        #   messages. Of course, this is only a temporary hack; we shouldn't hit storage this often.
        self.memory.chat_memory.clear()
        for memory in messages:
            hist.add_message(memory)


if __name__ == '__main__':
    REDIS_URL = ...
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 'You are a helpful assistant.'),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )

    model = ChatOpenAI(
        model="gpt-3.5-turbo",
    )
    chain = prompt | model
    chain_with_history = RunnableWithMessageHistoryPlus(
        chain,
        lambda session_id: RedisChatMessageHistoryFixed(session_id, url=REDIS_URL),
        memory=ConversationSummaryBufferMemory(
            llm=model,
            memory_key="history",
            return_messages=True,
            max_token_limit=2000
        ),
        input_messages_key="question",
        history_messages_key="history",
    )


    def chat(question):
        res = chain_with_history.stream(
            {"question": question},
            config={"configurable": {"session_id": 'test'}},
        )
        for message in res:
            print(message.content, end='')


    while _question := input('human:'):
        chat(_question)
        print()

954-Ivory avatar Jan 24 '24 18:01 954-Ivory

πŸ€–

Hello @954-Ivory! πŸš€

I'm Dosu, a friendly bot here to lend a hand. I can help you troubleshoot bugs, answer questions, and navigate the path to becoming a contributor. While we're waiting for a human maintainer, I'm here to assist you.

Just wanted to let you know I'm reviewing your issue on the memory and history implementation in LCEL. Expect a more comprehensive response soon!

dosubot[bot] avatar Jan 24 '24 18:01 dosubot[bot]

πŸ€–

Based on the information provided, it seems you're looking for a way to use memory with history in the LangChain Expression Language (LCEL) via monkey patching. Monkey patching is a technique to modify or extend the behavior of code at runtime without altering its source code.

In the context of LangChain, you might want to monkey patch to customize the behavior of memory or history classes for your specific use case. For instance, you might want to change how the memory maintains the state of the Chain or how the history stores the chat message history.

Here's a simple example of how you might monkey patch a method in the ZepMemory class:

from langchain.memory.zep_memory import ZepMemory

def new_method(self, *args, **kwargs):
    # Your custom implementation here
    pass

ZepMemory.some_method = new_method

In this example, some_method is a hypothetical method in the ZepMemory class that you want to override with your own implementation new_method.

Please note that monkey patching should be used with caution as it can lead to unexpected behavior if not done correctly. It's also important to note that changes made by monkey patching are global and will affect all instances of the patched class.
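The global effect mentioned above is easy to demonstrate with a toy class (the names here are illustrative, not from LangChain): because method lookup goes through the class, a patch applied to the class changes the behavior of instances created both before and after the patch.

```python
class Greeter:
    def hello(self):
        return "hi"


before = Greeter()                        # created before the patch
Greeter.hello = lambda self: "patched"    # monkey patch the class
after = Greeter()                         # created after the patch

# Both instances now resolve hello() to the patched version.
print(before.hello(), after.hello())      # patched patched
```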

As for your question about the disappearance of your description, it seems like there might have been a technical glitch. Could you please provide more details about what exactly disappeared and where it was supposed to appear? This will help me assist you better.


dosubot[bot] avatar Jan 24 '24 18:01 dosubot[bot]

Hello! I'm wondering if you could provide higher level context? Is the goal to make it easier to use existing memory abstractions with RunnableWithMessageHistory?

eyurtsev avatar Jan 24 '24 19:01 eyurtsev

Hello! I'm wondering if you could provide higher level context? Is the goal to make it easier to use existing memory abstractions with RunnableWithMessageHistory?

Maybe we need to fix this bug (https://github.com/langchain-ai/langchain/issues/14957#issuecomment-1907951114) first:

  1. We might need to add a pop_message method to BaseChatMessageHistory.

  2. Then add pop_message implementations for all BaseChatMessageHistory subclasses. (This is the tricky part for me.)

  3. Then replace the calls to chat_memory.messages.pop(0) in all BaseChatMemory subclasses.

Then we can move on to the RunnableWithMessageHistory related work.
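As a rough sketch of step 1 (all names here are hypothetical; pop_message is not part of LangChain's API), the default implementation on the base class could simply delegate to the in-memory list, leaving storage-backed subclasses to override it and also delete from their backend:

```python
from typing import List


class PatchedBaseHistory:
    """Stand-in for BaseChatMessageHistory with the proposed pop_message."""

    messages: List[str]

    def pop_message(self, index: int = -1) -> str:
        # Default: pop from the in-memory list. Subclasses backed by remote
        # storage (Redis, SQL, ...) would override this to also delete the
        # message from the backend, so BaseChatMemory subclasses can call
        # pop_message(0) instead of chat_memory.messages.pop(0).
        return self.messages.pop(index)


class InMemoryHistory(PatchedBaseHistory):
    def __init__(self):
        self.messages = ["oldest", "middle", "newest"]


h = InMemoryHistory()
popped = h.pop_message(0)   # prune the oldest message
print(popped, h.messages)   # oldest ['middle', 'newest']
```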

954-Ivory avatar Jan 24 '24 19:01 954-Ivory