langchain
ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)
System Info
langchain 0.0.206 python 3.11.3
Who can help?
No response
Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
Reproduction
Code
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.retrievers import TFIDFRetriever

tfretriever = TFIDFRetriever.from_texts([
    "My name is Luis Valencia",
    "I am 70 years old",
    "I like gardening, baking and hockey",
])
template = """
Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
------
<ctx>
{context}
</ctx>
------
<hs>
{chat_history}
</hs>
------
{question}
Answer:
"""
prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=template,
)
st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    memory=memory,
    chain_type_kwargs={
        "verbose": True,
        "prompt": prompt,
        "memory": ConversationBufferMemory(
            memory_key="chat_history",
            input_key="question",
        ),
    },
)
Error:
ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)
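For context on where this message comes from (a hedged analogy, not LangChain's actual source): the chain classes are pydantic models that forbid unknown fields, so any constructor keyword the chain does not declare, such as chain_type_kwargs here, is rejected at validation time. A minimal plain-Python sketch of that behavior, with invented field names:

```python
# Hypothetical analogy (plain Python, not LangChain/pydantic code): a model
# that forbids extra fields rejects any keyword it does not declare, which
# is what produces "extra fields not permitted" above.
class StrictChain:
    _fields = {"retriever", "memory", "verbose"}  # assumed declared fields

    def __init__(self, **kwargs):
        extra = sorted(set(kwargs) - self._fields)
        if extra:
            raise ValueError(
                f"{len(extra)} validation error for StrictChain "
                f"{' '.join(extra)} extra fields not permitted"
            )
        self.__dict__.update(kwargs)

try:
    StrictChain(retriever=None, chain_type_kwargs={"prompt": "..."})
except ValueError as err:
    print(err)  # chain_type_kwargs is not a declared field, so it is rejected
```

The real fix is therefore to route the prompt through a keyword the chain does declare, not to loosen validation.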
Expected behavior
I should be able to provide a custom prompt to my ConversationalRetrievalChain. Without a custom prompt it works and gets good answers from the vector DB, but I can't use custom prompts.
Answer generated by a 🤖
Answer
The error you're encountering is due to the fact that you're passing the chain_type_kwargs dictionary directly to the ConversationalRetrievalChain.from_llm() method. Instead, you should pass the prompt and memory arguments directly to the method.
tfretriever = TFIDFRetriever.from_texts([
    "My name is Luis Valencia",
    "I am 70 years old",
    "I like gardening, baking and hockey",
])
template = """
Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
------
<ctx>
{context}
</ctx>
------
<hs>
{chat_history}
</hs>
------
{question}
Answer:
"""
prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=template,
)
st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    verbose=True,
    prompt=prompt,
    memory=ConversationBufferMemory(
        memory_key="chat_history",
        input_key="question",
    ),
)
By passing the prompt and memory arguments directly, you should be able to use custom prompts without encountering the ValidationError.
Sources
- langchain/chains/question_answering/refine_prompts.py
- langchain/chains/conversation/base.py
- docs/extras/modules/memory/how_to/conversational_customization.ipynb
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
That doesn't work, but the error changed:
1 validation error for ConversationalRetrievalChain prompt extra fields not permitted (type=value_error.extra)
Updating to langchain-0.0.228 solved the issue in my case.
@gennaro-dibrino-cardoai, I updated langchain to 0.0.228, but it still didn't work. Can you show a sample code snippet explaining your use of ConversationalRetrievalChain with chain_type_kwargs (specifically, custom prompt engineering with ConversationalRetrievalChain)?
@levalencia Did updating to langchain-0.0.228 work for you?
@Koushik15042000 Were you able to figure this out?
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    combine_docs_chain_kwargs={'prompt': base_template},
    memory=memory,
)
@shivanipatel7, you can try this way. It works.
@Koushik15042000 This worked for me!! Thank you so much
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    combine_docs_chain_kwargs={'prompt': base_template},
    memory=memory,
)
@shivanipatel7, you can try this way. It works.
This is not working. Is there any documentation related to this problem?
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    condense_question_llm=llm,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    # combine_docs_chain_kwargs={'prompt': base_template},
    # memory=memory,
)
qa.combine_docs_chain.llm_chain.prompt.messages[0] = SystemMessagePromptTemplate.from_template(sys_prompt)
Define your own sys_prompt, set the context, and try. Hope it works.
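Why combine_docs_chain_kwargs succeeds where a top-level prompt= keyword does not can be sketched as follows (a plain-Python illustration with invented names, not LangChain source): the from_llm classmethod forwards that dict to the inner document-combining chain, where "prompt" is a declared field, while the outer ConversationalRetrievalChain never accepts a "prompt" keyword itself.

```python
# Hypothetical sketch: kwargs routing in a from_llm-style constructor.
# combine_docs_chain_kwargs goes to the inner chain; everything else
# goes to the outer chain, which has no "prompt" field.
def from_llm_sketch(llm, retriever, combine_docs_chain_kwargs=None, **outer_kwargs):
    inner = {"llm": llm, **(combine_docs_chain_kwargs or {})}  # "prompt" lands here
    outer = {"retriever": retriever, **outer_kwargs}           # no "prompt" field
    return {"combine_docs_chain": inner, **outer}

qa = from_llm_sketch(
    "fake-llm", "fake-retriever",
    combine_docs_chain_kwargs={"prompt": "custom template"},
)
print(qa["combine_docs_chain"]["prompt"])  # prints: custom template
```

This is also why assigning to qa.combine_docs_chain.llm_chain.prompt after construction works: both routes end up modifying the inner chain's prompt rather than the outer chain.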
Having the same problem with the latest version. The fixes proposed relate to the condense-question template, while the issue is with the prompt template.