Conversational Retriever Chain - condense_question_prompt parameter is not being considered.

Open · varuntejay opened this issue 2 years ago

System Info

LangChain 0.0.198, Python 3.10, AWS SageMaker environment

Who can help?

@agola11, @hwchase17

Information

  • [ ] The official example notebooks/scripts
  • [X] My own modified scripts

Related Components

  • [ ] LLMs/Chat Models
  • [ ] Embedding Models
  • [ ] Prompts / Prompt Templates / Prompt Selectors
  • [ ] Output Parsers
  • [ ] Document Loaders
  • [ ] Vector Stores / Retrievers
  • [ ] Memory
  • [ ] Agents / Agent Executors
  • [ ] Tools / Toolkits
  • [X] Chains
  • [ ] Callbacks/Tracing
  • [ ] Async

Reproduction

from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.prompt import PromptTemplate

prompt_template = """Answer based on context

Context: {context}

Question: {question}"""
TEST_PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

question = 'How do I bake cake?'

# llm and retriever are assumed to be defined earlier
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    condense_question_prompt=TEST_PROMPT,
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
)
chat_history = []

chain({"chat_history": chat_history, "question": question})

Expected behavior

The expected behavior is that the chain uses the given TEST_PROMPT when sending the prompt to the LLM, but with the code above it does not.

varuntejay · Jun 13 '23 09:06

This snippet should work. condense_question_prompt only controls how the follow-up question is rewritten from the chat history; the prompt that answers from the retrieved context has to be passed to the combine-docs chain via combine_docs_chain_kwargs:

prompt_template = """Answer based on context

{context}

Question: {question}"""
TEST_PROMPT = PromptTemplate(input_variables=["context", "question"], template=prompt_template)

chain = ConversationalRetrievalChain.from_llm(
  llm=llm, 
  combine_docs_chain_kwargs={"prompt": TEST_PROMPT},
  retriever=retriever, 
  return_source_documents=True, 
  verbose=True
)
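
For reference, here is a minimal end-to-end sketch that sets both prompts (assuming llm and retriever are defined elsewhere; the condense template below is modeled on the library's default and must take chat_history and question):

from langchain.chains import ConversationalRetrievalChain
from langchain.prompts.prompt import PromptTemplate

# Rewrites the follow-up question into a standalone question using the chat history.
condense_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_PROMPT = PromptTemplate.from_template(condense_template)

# Answers the (condensed) question from the retrieved documents.
qa_template = """Answer based on context

{context}

Question: {question}"""
QA_PROMPT = PromptTemplate(input_variables=["context", "question"], template=qa_template)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,                                          # assumed defined elsewhere
    retriever=retriever,                              # assumed defined elsewhere
    condense_question_prompt=CONDENSE_PROMPT,         # question-rewriting step
    combine_docs_chain_kwargs={"prompt": QA_PROMPT},  # answering step
    return_source_documents=True,
    verbose=True,
)

# With prior turns present, the condense step rewrites the follow-up into a
# standalone question before retrieval; with an empty chat_history it is skipped.
chat_history = [("What desserts are easy to make?", "A sponge cake is a classic choice.")]
result = chain({"chat_history": chat_history, "question": "How do I bake it?"})
print(result["answer"])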

ibizabroker · Jun 16 '23 06:06

Hi, @varuntejay! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you reported an issue regarding the condense_question_prompt parameter not being considered in the Conversational Retriever Chain. The expected behavior is that the chain should use the given TEST_PROMPT when sending the prompt to the LLM, but it currently does not. ibizabroker suggested a snippet of code that should resolve the issue.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository!

dosubot[bot] · Sep 15 '23 16:09