
Why does ConversationalRetrievalChain rephrase every human question?

Open muhammadsr opened this issue 2 years ago • 7 comments

This is killing me, literally! Why does ConversationalRetrievalChain rephrase every question I ask it? Here is an example:

Example:

Human: Hi
AI: Hello! How may I assist you today?

Human: What activities do you recommend?
AI Rephrasing Human Question: What are your top three activity recommendations?
AI Response: As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?

Human: Sure
AI Rephrasing Human Question: Which of those activities is your personal favorite?
AI Response: As an AI language model, I don't have the capability to have a preference. However, I can provide you with more information about the activities if you have any questions.

As you can see, the last message the human sends is just "Sure", yet the rephrasing turns it into a completely different question and destroys the flow of the conversation. Can we disable this rephrasing?


More verbose output:

> Entering new StuffDocumentsChain chain...


> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------

> Finished chain.

answer1:  Hello! How may I assist you today?

> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question.

Chat History:

Human: Hi
Assistant: Hello! How may I assist you today?
Follow Up Input: What activities do you recommend?
Standalone question:

> Finished chain.


> Entering new StuffDocumentsChain chain...


> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------

Human: What are your top three activity recommendations?

> Finished chain.

> Finished chain.
time: 5.121097803115845
answer2:  As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?


> Entering new LLMChain chain...
Prompt after formatting:
Given the following conversation and a follow up question.

Chat History:

Human: Hi
Assistant: Hello! How may I assist you today?
Human: What activities do you recommend?
Assistant: As an AI language model, I don't have personal preferences. However, based on the information provided, the top three choices are running, swimming, and hiking. Do you need any more info on these activities?
Follow Up Input: Sure
Standalone question:

> Finished chain.


> Entering new StuffDocumentsChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
System: Use the following pieces of context to answer the users question. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------

Human:  Which of those activities is your personal favorite?


> Finished chain.

> Finished chain.
answer3:  As an AI language model, I don't have the capability to have a preference. However, I can provide you with more information about the activities if you have any questions.

muhammadsr avatar May 04 '23 01:05 muhammadsr

It seems to me it's because we tell it to:

https://github.com/hwchase17/langchain/blob/624554a43a1ab0113f3d79ebcbc9e726faecb339/langchain/chains/conversational_retrieval/prompts.py#L4

_template = """Given the following conversation and a follow up question, **rephrase** the follow up question to be a standalone question.

rick2047 avatar May 04 '23 11:05 rick2047

In fact, we define another template called QA_PROMPT in the same file, and never use it.

rick2047 avatar May 04 '23 11:05 rick2047

How do I stop it from rephrasing questions?

siddhantdante avatar May 17 '23 12:05 siddhantdante

I did it like this:

 custom_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. Preserve the ogirinal question in the answer setiment during rephrasing.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
 CONDENSE_QUESTION_PROMPT_CUSTOM = PromptTemplate.from_template(custom_template)  

pass in condense_question_prompt=CONDENSE_QUESTION_PROMPT_CUSTOM like below. Here PROMPT is my custom prompt and condense_question_prompt is the the above: qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectordb.as_retriever(search_kwargs={"k": 3,"search_type":"mmr"}),combine_docs_chain_kwargs={'prompt': PROMPT},verbose=True,condense_question_prompt=CONDENSE_QUESTION_PROMPT_CUSTOM )

pratikthakkar avatar May 25 '23 15:05 pratikthakkar

I did it like this: […]

How did you handle context here?

charles-adedotun avatar Jun 02 '23 09:06 charles-adedotun

I just ended up using a non-memory QA chain. It's faster, and rephrasing user input is just too unreliable.

jasan-s avatar Jun 21 '23 08:06 jasan-s

I did it like this: […]

I tried the above suggestion with a condense question template along the lines of "Return the question back to me exactly as is". Not ideal, but it seemed like a workaround for now. Unfortunately this produces an error. Any help greatly appreciated. Here's some of my code:

query_string = "Tell me 5 more"

prompt_template = """Instructions, blah blah blah. Context: {context} Chat History: {chat_history} Question: {question} More instructions:"""

condense_question_template = """Return the exact same text back to me"""

prompt = PromptTemplate(template=prompt_template, input_variables=["context", "chat_history", "question"])

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0.3, model="gpt-3.5-turbo"),
    retriever=docsearch,
    verbose=True,
    condense_question_prompt=condense_question_template,
    combine_docs_chain_kwargs={"prompt": prompt},
)

The chain works fine without the condense_question_prompt, and errors when introduced. Error for the last line shown:

ValidationError: 1 validation error for LLMChain prompt value is not a valid dict (type=type_error.dict)
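
A likely cause: condense_question_prompt expects a PromptTemplate object, not a raw string, and an echo-style template still needs the {chat_history} and {question} variables the chain fills in. A minimal sketch of a possible fix (an untested assumption, not a confirmed answer):

from langchain.prompts import PromptTemplate

# Hypothetical echo-style template: the LLM has to see the question to return it.
condense_question_template = """Return the follow up question back exactly as is.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""

condense_question_prompt = PromptTemplate.from_template(condense_question_template)
# then pass condense_question_prompt=condense_question_prompt to from_llm(...)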

madeovmetal avatar Jul 06 '23 23:07 madeovmetal

Does anyone have any good solutions?

timxieICN avatar Jul 17 '23 21:07 timxieICN

Does anyone have any good solutions?

I wouldn't call this a 'good' solution, but it's a solution. I used Zep to store the chat history and manually pulled, formatted, and inserted it into the prompt before passing said prompt to a chain with RetrievalQA.

https://docs.getzep.com/deployment/quickstart

madeovmetal avatar Jul 17 '23 22:07 madeovmetal

@muhammadsr How did you solve this problem?

iiitmahesh avatar Aug 24 '23 14:08 iiitmahesh

@muhammadsr How did you solve this problem? […]

I resolved this by manually storing and handling the chat history in a database (you can use something like Zep if you want extra features, but any regular database works) and using RetrievalQA like so:

formatted_history = ...  # some code to convert the chat history into a string

prompt = """some instructions. here is some context: {context} here is your chat history: """ + formatted_history + """ remainder of the instructions {question}"""

Now the chat history is already baked into the prompt when you pass it to the RetrievalQA chain.
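
A minimal runnable sketch of this pattern, assuming a (speaker, text) history shape and an existing vectordb (both placeholders, not from the comment above):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# Assumed shape of the stored history: (speaker, text) pairs pulled from your database.
history = [("Human", "Hi"), ("Assistant", "Hello! How may I assist you today?")]
formatted_history = "\n".join(f"{speaker}: {text}" for speaker, text in history)

# Bake the history into the prompt itself; RetrievalQA only fills {context} and {question}.
template = (
    "Use the following context to answer the question.\n\n"
    "Context:\n{context}\n\n"
    "Chat history:\n" + formatted_history + "\n\n"
    "Question: {question}\nAnswer:"
)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

qa = RetrievalQA.from_chain_type(
    ChatOpenAI(temperature=0),
    retriever=vectordb.as_retriever(),  # assumes an existing vector store
    chain_type_kwargs={"prompt": prompt},
)
print(qa.run("What activities do you recommend?"))

Since the history is concatenated into the template itself, the chain never sees or rewrites the user's wording.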

madeovmetal avatar Aug 24 '23 15:08 madeovmetal

I think it'd be better if there was a flag in ConversationalRetrievalChain() where we could choose to skip the question-rephrasing chain before generation. Can this be considered as an issue and dealt with accordingly?

AshminJayson avatar Sep 04 '23 14:09 AshminJayson

@AshminJayson - there is: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L65

timxieICN avatar Oct 04 '23 17:10 timxieICN

There's a parameter called rephrase_question and you can set it to False: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L65
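
For example, a minimal sketch (llm and vectordb are placeholders for your own setup):

from langchain.chains import ConversationalRetrievalChain

# rephrase_question=False: the condensed question is still generated and used
# for retrieval, but the original user question is what the QA step answers.
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(),
    rephrase_question=False,
)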

timxieICN avatar Oct 04 '23 17:10 timxieICN

Is there any way to find the same parameter in JS/TS?

grushaw4 avatar Oct 20 '23 07:10 grushaw4

I got the idea from madeovmetal; here is my condense_question_template:

condense_question_template = """
    Return text in the original language of the follow up question.
    If the follow up question does not need context, return the exact same text back.
    Never rephrase the follow up question given the chat history unless the follow up question needs context.
    
    Chat History: {chat_history}
    Follow Up question: {question}
    Standalone question:
"""
condense_question_prompt = PromptTemplate.from_template(condense_question_template)

chat = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory,
                                             condense_question_prompt=condense_question_prompt)

Two working examples:

Chat History:
Human: hello
Assistant: Hello! How can I assist you today?

Follow Up question: hi
Standalone question: hi

It can also rephrase based on history:

Chat History:
Human: what is large language model
Assistant: A Large Language Model (LLM) is a powerful artificial intelligence model capable of understanding and generating human language text, used in a wide range of natural language processing tasks.

Follow Up question: what can it be used for
Standalone question: what can a Large Language Model be used for?

changlingao avatar Oct 28 '23 00:10 changlingao

I got the idea from madeovmetal; here is my condense_question_template: […]

Yes, I tried this. It works alright, but I cannot fully trust it. It is a very redundant step actually: sending the prompt to OpenAI to get it rephrased, and then sending it to the retriever again. I wish I could skip it.

vijay-ravi avatar Feb 06 '24 23:02 vijay-ravi

There's a parameter called rephrase_question and you can set it to False: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L65

Doesn't work for me. Can somebody share an example where this worked?

vijay-ravi avatar Feb 06 '24 23:02 vijay-ravi

To anyone who found that the LLM input isn't aligned even when rephrase_question is set to False: I noticed that although the question sent to the LLM itself stays unchanged, the query used for retrieving docs is still the rephrased question (as shown in the code linked below), which degraded retrieval and generation quality in my case.

https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L155
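
Roughly, the logic there looks like this (a paraphrased sketch, not the exact source):

# paraphrased from ConversationalRetrievalChain._call, not the exact source
new_question = question_generator.run(question=question, chat_history=chat_history)
docs = retriever.get_relevant_documents(new_question)  # retrieval always uses the rephrased question
final_question = new_question if rephrase_question else question  # only the QA step honors the flag
answer = combine_docs_chain.run(input_documents=docs, question=final_question)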

I finally turned to using RetrievalQA and adding the history manually.

Altair-Alpha avatar Feb 07 '24 07:02 Altair-Alpha

I got the idea from madeovmetal; here is my condense_question_template: […]

It works for me, but can you help me hide the rephrased question? Thanks.

nhkhangit avatar May 15 '24 05:05 nhkhangit

I did it like this: […]

How did you handle context here?

Basically you need to set two prompts: in the first prompt (the custom condense question prompt) you deal with the rephrasing, and in the second prompt you handle three things: the follow up question, the context, and the chat history. Here is the complete solution below:

custom_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. Preserve the ogirinal question in the answer setiment during rephrasing.

    Chat History:
    {chat_history}
    Follow Up Input: {question}
    Standalone question:"""
    self.CONDENSE_QUESTION_PROMPT_CUSTOM = PromptTemplate.from_template(custom_template)
    
    
    template = """
    You are a virtual assistant............

    CONTEXT:
    {context}

    CHAT HISTORY: 
    {chat_history}

    Follow Up Input: 
    {question}
    """

    # Initialize the prompt
    self.QA_PROMPT = PromptTemplate(template=template, input_variables=[
                        "context","chat_history", "question"])



        # Build a second Conversational Retrieval Chain
        second_chain = ConversationalRetrievalChain.from_llm(
            self.llm,
            retriever=self.vectordb.as_retriever(),
            memory=retrieved_memory,
            combine_docs_chain_kwargs={"prompt": self.QA_PROMPT},
            verbose=True,
            condense_question_prompt=self.CONDENSE_QUESTION_PROMPT_CUSTOM,
            get_chat_history=lambda h : h
        )

abusufyanvu avatar Jul 01 '24 07:07 abusufyanvu