
Giving SystemMessage/Context to ConversationalRetrievalChain and ConversationBufferMemory

Open etkinhud-mvla opened this issue 1 year ago • 6 comments

I'm trying to build a chatbot that can chat about PDFs, and I got it working with memory using ConversationBufferMemory and ConversationalRetrievalChain, as in this example: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html

Now I'm trying to give the AI some special instructions to talk like a pirate (just for testing, to see if it is receiving the instructions). I think this is meant to be a SystemMessage, or something with a prompt template? I've tried everything I have found, but all the examples in the documentation are for ConversationChain, and I end up having problems with them. So far the only thing that hasn't thrown any errors is this:

template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], template=template
)
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), PROMPT, memory=memory, return_source_documents=True)

It still doesn't have any effect on the results, so I don't know if it is doing anything at all. I also think it's the wrong approach, and I should be using SystemMessages (maybe on the memory, not the qa), but nothing I try from the documentation works and I'm not sure what to do.
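
For what it's worth, formatting the template on its own does what I'd expect; it just fills the two variables (a plain `str.format` check, no LangChain involved):

```python
# Plain-Python check of the template above; no LangChain involved.
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""

filled = template.format(
    chat_history="Human: hi\nAI: Ahoy there, Ay Ay Matey",
    question="What is the PDF about?",
)
print(filled)
```

So the template itself is fine; the question is where the chain actually uses it.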

etkinhud-mvla avatar May 02 '23 05:05 etkinhud-mvla

I'm struggling with the same issue right now; I hope someone provides an answer.

tarek-kerbedj avatar May 02 '23 14:05 tarek-kerbedj

Does anyone have any insight into this problem? I can't really continue without fixing it.

etkinhud-mvla avatar May 07 '23 01:05 etkinhud-mvla

Same issue here. I've tried adding a prefix to the prompt but to no avail.

abhiamishra avatar May 11 '23 19:05 abhiamishra

One approach: get the result from ConversationChain and then pass it into another chain with your specific personality to rephrase the answer? I'll give that a shot and see how it goes.

EDIT

The following prompt seems to work well:

Rewrite the following text: ____ as if spoken by ___ (you can add your adjectives, etc)
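
A minimal sketch of that two-step idea, with plain functions standing in for the two LLM calls (both function names here are made up for illustration):

```python
def qa_chain(question: str, context: str) -> str:
    # Stand-in for the retrieval QA step: in reality this would be an
    # LLM call over the retrieved documents.
    return f"According to the documents, {context}."

def rephrase_prompt(text: str, persona: str) -> str:
    # Build the rewrite prompt from the template above; a second chain
    # would send this string to the LLM to restyle the answer.
    return f"Rewrite the following text: {text} as if spoken by {persona}"

raw_answer = qa_chain("Where is the treasure?", "the treasure is buried on the island")
prompt = rephrase_prompt(raw_answer, "a pirate")
print(prompt)
```

The personality lives entirely in the second step, so it can't interfere with retrieval.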

abhiamishra avatar May 11 '23 19:05 abhiamishra

Same issue here, looking for a similar solution with systemMessagePrompt passed to ConversationalRetrievalQAChain.

onlistudio avatar Jul 02 '23 18:07 onlistudio

@ThatDevHuddy @abhiamishra @tarek-kerbedj @onlistudio Here y'all, switch up the prompts here and pass in a condense_question_prompt (or not), if needed. I tried to make this one chain setup/call as comprehensive as possible.

I went through a bunch of the library source, and figured out most of the customization:

    from langchain.prompts.chat import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        SystemMessagePromptTemplate,
    )

    # chat_model, vector_store, memory, and condense_question_prompt are
    # assumed to be defined elsewhere.
    qa_system_template = """Answer the question using the following contexts.
    ----------------
    {context}"""
    messages = [
        SystemMessagePromptTemplate.from_template(qa_system_template),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
    qa_system_prompt = ChatPromptTemplate.from_messages(messages)
    qa = ConversationalRetrievalChain.from_llm(
        chat_model,
        vector_store.as_retriever(search_kwargs={"k": 8}),
        memory=memory,
        condense_question_prompt=condense_question_prompt,
        verbose=True,
        combine_docs_chain_kwargs={"prompt": qa_system_prompt},
    )
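
If it helps intuition, here's roughly what the chain does with those two prompts, sketched in plain Python (toy stand-ins for the LLM calls, not LangChain internals):

```python
def condense_question(chat_history: str, question: str) -> str:
    # Toy stand-in for the condense step: an LLM rewrites a follow-up
    # into a standalone question using the chat history.
    return f"{question} (standalone, given history: {chat_history})"

def build_qa_messages(context: str, question: str):
    # Toy stand-in for the combine-docs step driven by qa_system_prompt:
    # the system message carries {context}, the human message {question}.
    system = ("Answer the question using the following contexts.\n"
              "----------------\n" + context)
    return [("system", system), ("human", question)]

standalone = condense_question("Human: hi", "What does chapter 2 cover?")
messages = build_qa_messages("Chapter 2 covers rigging and sails.", standalone)
```

That's why the system instructions belong in combine_docs_chain_kwargs: only the second step talks to the user-facing model.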

Hope this helps! Might even put out a blog soon going through some intricacies/optimizations/how-tos since it looks like a LOT of people are having difficulty using Langchain well, and I've done the work so may as well!

ShantanuNair avatar Jul 11 '23 15:07 ShantanuNair

@ShantanuNair This is great but are you having any issues with chat history? I am using this setup

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0.7, model_name="gpt-4"),
    vectorstore.as_retriever(),
    memory=memory,
    chain_type="stuff",
    condense_question_llm=None,
    combine_docs_chain_kwargs={"prompt": qa_system_prompt},
)

However, it does not seem to remember anything in the chat. Do I have to pass {chat_history} explicitly in the prompt?
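
My mental model of what the buffer memory should be doing is just this (a toy sketch of my understanding, not the real ConversationBufferMemory):

```python
class ToyBufferMemory:
    # Toy illustration of what I understand ConversationBufferMemory to
    # do: store each turn and expose the transcript under memory_key.
    def __init__(self, memory_key: str = "chat_history"):
        self.memory_key = memory_key
        self.turns = []

    def save_context(self, question: str, answer: str):
        self.turns.append(("Human", question))
        self.turns.append(("AI", answer))

    def load_memory_variables(self):
        transcript = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return {self.memory_key: transcript}

mem = ToyBufferMemory()
mem.save_context("What is the PDF about?", "It is about sailing.")
variables = mem.load_memory_variables()
```

If that's right, the chain should be loading the transcript itself, so I don't see why I'd need {chat_history} in the prompt.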

muhammadsr avatar Aug 20 '23 11:08 muhammadsr

Hi, @etkinhud-mvla! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue is about trying to give special instructions to a chatbot using SystemMessage or a prompt template. You and other users have tried various approaches, including using ConversationBufferMemory and ConversationalRetrievalChain, but none of them have had the desired effect. However, there are some potential resolutions that have been suggested. One user recommends using a specific prompt template that seems to work well, and another user provides a comprehensive setup/call example with customization options.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.

dosubot[bot] avatar Nov 19 '23 16:11 dosubot[bot]