chain_type "refine" error with ChatOpenAI in ConversationalRetrievalChain
I'm trying to build a chatbot with ConversationalRetrievalChain, and I get this error when using the "refine" chain type:
```
  File "/Users/chris/.pyenv/versions/3.10.10/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 218, in load_qa_chain
    return loader_mapping[chain_type](
  File "/Users/chris/.pyenv/versions/3.10.10/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 176, in _load_refine_chain
    return RefineDocumentsChain(
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for RefineDocumentsChain
prompt
  extra fields not permitted (type=value_error.extra)
```
```python
question_gen_llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    verbose=True,
    callback_manager=question_manager,
)
streaming_llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    streaming=True,
    callback_manager=stream_manager,
    verbose=True,
    temperature=0.3,
)
question_generator = LLMChain(
    llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT, callback_manager=manager
)
combine_docs_chain = load_qa_chain(
    streaming_llm, chain_type="refine", prompt=QA_PROMPT, callback_manager=manager
)
qa = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=combine_docs_chain,
    question_generator=question_generator,
    callback_manager=manager,
    verbose=True,
    return_source_documents=True,
)
```
I think you should check here: the refine chain has no parameter called `prompt`; you should pass `question_prompt` as the input instead.
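The ValidationError itself comes from pydantic rejecting a field the model does not declare. A minimal sketch reproducing the same failure mode (the class and field names below are made up for illustration, not LangChain's actual model):

```python
from pydantic import BaseModel, ValidationError


class RefineChainLike(BaseModel):
    """Toy stand-in for RefineDocumentsChain: declared fields only."""

    document_variable_name: str

    class Config:
        extra = "forbid"  # unknown kwargs raise ValidationError


err = None
try:
    # Mirrors load_qa_chain forwarding an unsupported `prompt` kwarg
    # into RefineDocumentsChain(**kwargs).
    RefineChainLike(document_variable_name="context", prompt="...")
except ValidationError as e:
    err = e

print(err)  # message says the extra input is not permitted (wording varies by pydantic version)
```

So any kwarg that `_load_refine_chain` does not consume is forwarded into the model constructor and rejected there.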
I'm getting the same error when calling ChatVectorDBChain. The error occurs for chain_type = "refine", "map_reduce", or "map_rerank".

The code:
```python
from langchain.vectorstores.weaviate import Weaviate
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

vectorstore = Weaviate(client, "DOCUMENTS", "content")
MyOpenAI = OpenAI(openai_api_key="sk-")
qa = ChatVectorDBChain.from_llm(
    llm=MyOpenAI,
    vectorstore=vectorstore,
    return_source_documents=True,
    chain_type="refine",
)
```
Error stack:

```
ValidationError                           Traceback (most recent call last)
Cell In[93], line 1
----> 1 qa = ChatVectorDBChain.from_llm(llm=MyOpenAI, vectorstore=vectorstore, return_source_documents=True, chain_type='refine')

File ~/opt/anaconda3/envs/research/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py:218, in ChatVectorDBChain.from_llm(cls, llm, vectorstore, condense_question_prompt, qa_prompt, chain_type, **kwargs)
    207 @classmethod
    208 def from_llm(
    209     cls,
        (...)
    215     **kwargs: Any,
    216 ) -> BaseConversationalRetrievalChain:
    217     """Load chain from LLM."""
--> 218     doc_chain = load_qa_chain(
    219         llm,
    220         chain_type=chain_type,
    221         prompt=qa_prompt,
    222     )
    223     condense_question_chain = LLMChain(llm=llm, prompt=condense_question_prompt)
    224     return cls(
    225         vectorstore=vectorstore,
    226         combine_docs_chain=doc_chain,
    227         question_generator=condense_question_chain,
    228         **kwargs,
    229     )

File ~/opt/anaconda3/envs/research/lib/python3.9/site-packages/langchain/chains/question_answering/__init__.py:218, in load_qa_chain(llm, chain_type, verbose, callback_manager, **kwargs)
    213 if chain_type not in loader_mapping:
    214     raise ValueError(
    215         f"Got unsupported chain type: {chain_type}. "
    216         f"Should be one of {loader_mapping.keys()}"
    217     )
--> 218 return loader_mapping[chain_type](
    219     llm, verbose=verbose, callback_manager=callback_manager, **kwargs
    220 )

File ~/opt/anaconda3/envs/research/lib/python3.9/site-packages/langchain/chains/question_answering/__init__.py:176, in _load_refine_chain(llm, question_prompt, refine_prompt, document_variable_name, initial_response_name, refine_llm, verbose, callback_manager, **kwargs)
    169 _refine_llm = refine_llm or llm
    170 refine_chain = LLMChain(
    171     llm=_refine_llm,
    172     prompt=_refine_prompt,
    173     verbose=verbose,
    174     callback_manager=callback_manager,
    175 )
--> 176 return RefineDocumentsChain(
    177     initial_llm_chain=initial_chain,
    178     refine_llm_chain=refine_chain,
    179     document_variable_name=document_variable_name,
    180     initial_response_name=initial_response_name,
    181     verbose=verbose,
    182     callback_manager=callback_manager,
    183     **kwargs,
    184 )

File ~/opt/anaconda3/envs/research/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for RefineDocumentsChain
prompt
  extra fields not permitted (type=value_error.extra)
```
I am also getting this error when using any chain type other than "stuff".
I made it work by just changing:

```python
doc_chain = load_qa_chain(streaming_llm, chain_type="refine", prompt=QA_PROMPT)
```

to:

```python
doc_chain = load_qa_chain(streaming_llm, chain_type="refine", refine_prompt=QA_PROMPT)
```
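For reference, here is my reading of which prompt kwargs each chain type accepts, based on `langchain/chains/question_answering/__init__.py` around the version shown in the tracebacks above (the `check_prompt_kwargs` helper is hypothetical, just a fail-fast guard; verify the mapping against your installed version):

```python
# Prompt kwargs accepted by load_qa_chain per chain_type, as of the
# langchain version in the tracebacks above (verify against your version).
PROMPT_KWARGS = {
    "stuff": {"prompt"},
    "map_reduce": {"question_prompt", "combine_prompt"},
    "refine": {"question_prompt", "refine_prompt"},
    "map_rerank": {"prompt"},
}


def check_prompt_kwargs(chain_type: str, **kwargs: object) -> None:
    """Fail fast with a clearer message than pydantic's 'extra fields'."""
    allowed = PROMPT_KWARGS[chain_type]
    unknown = set(kwargs) - allowed
    if unknown:
        raise TypeError(
            f"chain_type={chain_type!r} does not accept {sorted(unknown)}; "
            f"use one of {sorted(allowed)}"
        )


# `prompt` is only valid for "stuff" and "map_rerank":
check_prompt_kwargs("refine", refine_prompt="...")  # ok
check_prompt_kwargs("stuff", prompt="...")  # ok
```

Calling such a guard before `load_qa_chain` would surface the mistake as a readable TypeError instead of pydantic's "extra fields not permitted".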
condense_question_prompt?!
Hi, @chrischjh! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
Based on my understanding, the issue you reported concerns the "refine" chain type in LangChain's ConversationalRetrievalChain when used with ChatOpenAI. An error occurred when loading the refine chain because extra fields are not permitted, in this case the `prompt` field. Some users, like hkaraoguz, alabrashJr, and prasoons075, also reported similar errors. However, jsandlerus provided a solution by changing the `prompt` parameter to `refine_prompt`, which resolved the issue.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository!