langchain
How can I structure a prompt template for RetrievalQAWithSourcesChain with the ChatOpenAI model?
Hello, I am deploying RetrievalQAWithSourcesChain with the ChatOpenAI model right now. Unlike the OpenAI model, you can provide a system message for the model, which is a great addition. But no matter how many times I try, it seems the prompt cannot be inserted into the chain. Please suggest what I should do with my code:
```python
# Prompt construction
template = """You play as {user_name}'s assistant, your name is {name}, personality is {personality}, duty is {duty}"""
system_message_prompt = SystemMessagePromptTemplate.from_template(template)

human_template = """
Context: {context}
Question: {question}
Please indicate if you are not sure about the answer. Do NOT make up answers.
You MUST answer in {language}."""
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
ChatPromptTemplate.input_variables = ["context", "question", "name", "personality", "user_name", "duty", "language"]

# Define the chain
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    combine_documents_chain=qa_chain,
    chain_type="stuff",
    retriever=compression_retriever,
    chain_type_kwargs={"prompt": chat_prompt},
)
```
I am not really sure what the goal is. RetrievalQAWithSourcesChain is a specialized chain that doesn't really work well standalone in a conversational setting. Imho the right approach is to use a Chat Conversation Agent, as I've just described here:
https://github.com/hwchase17/langchain/issues/3523#issuecomment-1523936882
There you can easily modify the system prompt and input variables as desired, and the agent will use all of this when doing QA with sources: https://github.com/hwchase17/langchain/blob/85dae78548ed0c11db06e9154c7eb4236a1ee246/langchain/agents/conversational_chat/base.py#L59
EDIT: There is also a conversational version of the chain, but as far as I can see it doesn't leverage chat prompt templates either, and imho the solution mentioned above will work better for your use case: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html#conversationalretrievalchain-with-question-answering-with-sources
Thanks, I will try the agent approach. I checked the document you sent; this should be the right way.
Hi, @SaaS1973! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is about structuring a prompt template for the RetrievalQAWithSourcesChain with the ChatOpenAI model. It seems that jphme suggested using a Chat Conversation Agent instead, and even provided an example and code modifications. You responded by thanking jphme and agreeing to try the agent approach.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!