
ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)

Open levalencia opened this issue 1 year ago • 3 comments

System Info

langchain 0.0.206 python 3.11.3

Who can help?

No response

Information

  • [ ] The official example notebooks/scripts
  • [ ] My own modified scripts

Related Components

  • [ ] LLMs/Chat Models
  • [ ] Embedding Models
  • [X] Prompts / Prompt Templates / Prompt Selectors
  • [ ] Output Parsers
  • [ ] Document Loaders
  • [ ] Vector Stores / Retrievers
  • [ ] Memory
  • [ ] Agents / Agent Executors
  • [ ] Tools / Toolkits
  • [ ] Chains
  • [ ] Callbacks/Tracing
  • [ ] Async

Reproduction

Code


```python
tfretriever = TFIDFRetriever.from_texts(
    ["My name is Luis Valencia",
     "I am 70 years old",
     "I like gardening, baking and hockey"])

template = """
    Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
    ------
    <ctx>
    {context}
    </ctx>
    ------
    <hs>
    {chat_history}
    </hs>
    ------
    {question}
    Answer:
    """

prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=template,
)

st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    memory=memory,
    chain_type_kwargs={
        "verbose": True,
        "prompt": prompt,
        "memory": ConversationBufferMemory(
            memory_key="chat_history",
            input_key="question"),
    })
```
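For reference, the `PromptTemplate` above performs plain placeholder substitution over the declared input variables. A minimal LangChain-free sketch of what the formatted prompt looks like (the filler values below are made up for illustration):

```python
# Toy stand-in for PromptTemplate.format(): str.format() substitutes
# the three declared input variables into the template string.
template = """
Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
------
<ctx>
{context}
</ctx>
------
<hs>
{chat_history}
</hs>
------
{question}
Answer:
"""

filled = template.format(
    context="My name is Luis Valencia. I am 70 years old.",
    chat_history="Human: Hi\nAI: Hello!",
    question="What are my hobbies?",
)
print(filled)
```

The chain fills `{context}` from the retriever's documents and `{chat_history}` from memory at call time; only `{question}` comes from the user input.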

Error:

ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)

Expected behavior

I should be able to provide custom context to my ConversationalRetrievalChain. Without a custom prompt it works and gets good answers from the vector DB, but I can't use custom prompts.

levalencia avatar Jun 23 '23 08:06 levalencia

Answer generated by a 🤖

Answer

The error you're encountering occurs because you're passing the chain_type_kwargs dictionary directly to the ConversationalRetrievalChain.from_llm() method. Instead, you should pass the prompt and memory arguments directly to the method.

```python
tfretriever = TFIDFRetriever.from_texts(
    ["My name is Luis Valencia",
     "I am 70 years old",
     "I like gardening, baking and hockey"])

template = """
    Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
    ------
    <ctx>
    {context}
    </ctx>
    ------
    <hs>
    {chat_history}
    </hs>
    ------
    {question}
    Answer:
    """

prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=template,
)

st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    verbose=True,
    prompt=prompt,
    memory=ConversationBufferMemory(
        memory_key="chat_history",
        input_key="question"
    )
)
```

By passing the prompt and memory arguments directly, you should be able to use custom prompts without encountering the ValidationError.



This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

devstein avatar Jun 23 '23 15:06 devstein

That doesn't work, but the error changed:

1 validation error for ConversationalRetrievalChain prompt extra fields not permitted (type=value_error.extra)

levalencia avatar Jun 23 '23 22:06 levalencia
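Both errors are instances of the same mechanism: the chain is a pydantic model that forbids undeclared fields, so any keyword the constructor doesn't declare (`chain_type_kwargs`, then `prompt`) is rejected at validation time. A rough, dependency-free analogy of that behavior (not LangChain's actual code; the declared field names are illustrative):

```python
class StrictChain:
    """Toy stand-in for a pydantic model configured to forbid extra fields."""
    _declared = {"retriever", "memory", "combine_docs_chain_kwargs"}

    def __init__(self, **kwargs):
        extra = sorted(set(kwargs) - self._declared)
        if extra:
            # Mirrors pydantic's "extra fields not permitted" message.
            raise ValueError(
                f"{len(extra)} validation error for {type(self).__name__}: "
                f"{', '.join(extra)} extra fields not permitted"
            )
        self.__dict__.update(kwargs)

# An undeclared keyword fails, just like `chain_type_kwargs` / `prompt` above:
try:
    StrictChain(retriever=None, prompt="custom prompt")
except ValueError as err:
    print(err)  # 1 validation error for StrictChain: prompt extra fields not permitted

# A declared keyword is accepted:
chain = StrictChain(combine_docs_chain_kwargs={"prompt": "custom prompt"})
```

This is why the fix that eventually works in this thread routes the custom prompt through a keyword the chain does declare (`combine_docs_chain_kwargs`) rather than a top-level `prompt` argument.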

updating to langchain-0.0.228 solved the issue in my case

gennaro-dibrino-cardoai avatar Jul 08 '23 10:07 gennaro-dibrino-cardoai

@gennaro-dibrino-cardoai, I updated langchain to 0.0.228, but it still didn't work. Can you share a sample code snippet showing your implementation of ConversationalRetrievalChain with chain_type_kwargs (to be more specific, custom prompt engineering with ConversationalRetrievalChain)?

Koushik15042000 avatar Jul 13 '23 22:07 Koushik15042000

@levalencia Did updating to langchain-0.0.228 work for you?

Koushik15042000 avatar Jul 13 '23 22:07 Koushik15042000

@Koushik15042000 Were you able to figure this out?

shivanipatel7 avatar Jul 25 '23 22:07 shivanipatel7

```python
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    combine_docs_chain_kwargs={'prompt': base_template},
    memory=memory
)
```

@shivanipatel7, you can try this way. It works.

Koushik15042000 avatar Jul 25 '23 22:07 Koushik15042000

@Koushik15042000 This worked for me!! Thank you so much

shivanipatel7 avatar Jul 25 '23 22:07 shivanipatel7

```python
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    combine_docs_chain_kwargs={'prompt': base_template},
    memory=memory
)
```

This is not working. Is there any documentation related to this problem?

imrankh46 avatar Jul 26 '23 16:07 imrankh46

```python
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    condense_question_llm=llm,
    chain_type="stuff",
    get_chat_history=lambda h: h,
    # combine_docs_chain_kwargs={'prompt': base_template},
    # memory=memory
)
qa.combine_docs_chain.llm_chain.prompt.messages[0] = SystemMessagePromptTemplate.from_template(sys_prompt)
```

Define your own sys_prompt, set the context, and try. Hope it works.

Koushik15042000 avatar Jul 31 '23 23:07 Koushik15042000

Having the same problem with the latest version. The fixes proposed relate to the condense-question template, while the issue is with the prompt template.

realSAH avatar Nov 03 '23 04:11 realSAH