How to use a ConversationChain with PydanticOutputParser
How can I create a ConversationChain that uses a PydanticOutputParser for the output?
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                               MessagesPlaceholder, SystemMessagePromptTemplate)
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
system_message_prompt = SystemMessagePromptTemplate.from_template("Tell a joke")
# If I put it here I get `KeyError: {'format_instructions'}` in
# `/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)`
# system_message_prompt.prompt.output_parser = parser
# system_message_prompt.prompt.partial_variables = {"format_instructions": parser.get_format_instructions()}
human_message_prompt = HumanMessagePromptTemplate.from_template("{input}")
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt, MessagesPlaceholder(variable_name="history")]
)
# This runs but I don't get any JSON back
chat_prompt.output_parser = parser
chat_prompt.partial_variables = {"format_instructions": parser.get_format_instructions()}
memory = ConversationBufferMemory(return_messages=True)
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, prompt=chat_prompt, verbose=True, memory=memory)
conversation.predict(input="Tell me a joke")
> Entering new ConversationChain chain...
Prompt after formatting:
System: Tell a joke
Human: Tell me a joke
> Finished chain.
'\n\nQ: What did the fish say when it hit the wall?\nA: Dam!'
I am having the same issue with LLMChain. It seems like it is not running the parser function at all.
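My understanding of why (treat this as a sketch of the behavior, not documentation): chain.run() and Chain.__call__ return the raw LLM string and never invoke the prompt's output_parser, so the parse step has to be triggered explicitly. Assuming chain is an LLMChain over a prompt with a single {input} variable, and parser is the PydanticOutputParser from above:

# run() returns the raw completion string; the output_parser attached
# to the prompt is not applied automatically.
raw_output = chain.run(input="Tell me a joke")

# Invoke the parser explicitly to get a Joke instance.
joke = parser.parse(raw_output)
print(joke.setup)
print(joke.punchline)

Legacy LLMChain also has predict_and_parse(), which does apply the prompt's output parser, though I'm not certain it exists in every version.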
I can leverage PydanticOutputParser with SystemMessagePromptTemplate by first creating a PromptTemplate (with the format instructions as a partial variable) and then constructing the SystemMessagePromptTemplate from it, instead of using SystemMessagePromptTemplate.from_template:
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                               PromptTemplate, SystemMessagePromptTemplate)
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
# see: https://python.langchain.com/en/latest/modules/prompts/chat_prompt_template.html
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}",
    input_variables=[],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
human_message_prompt = HumanMessagePromptTemplate.from_template("{query}")
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
print(chat_prompt.format_prompt(query="Tell me a joke.").to_string())
The output:
System: Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema:
```
{"properties": {"setup": {"title": "Setup", "description": "question to set up a joke", "type": "string"}, "punchline": {"title": "Punchline", "description": "answer to resolve the joke", "type": "string"}}, "required": ["setup", "punchline"]}
```
Human: Tell me a joke.
For me it worked when I added an LLMChain:
from typing import List

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                               PromptTemplate, SystemMessagePromptTemplate)
from pydantic import BaseModel, Field

model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0.5)

# Define your desired data structure.
class Ideas(BaseModel):
    brainstorm_ideas: List[str] = Field(description="list of ideas to brainstorm")

query = "Things to do in New York City"

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Ideas)
prompt = PromptTemplate(
    template="Brainstorm ideas based on the query.\n{format_instructions}\n",
    input_variables=[],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(template="Query: {query}\n", input_variables=["query"])
)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=model, prompt=chat_prompt)

# Get the result
output = chain.run(query=query)

# Convert JSON to Python object using the parser
result = parser.parse(output)
The question is about usage with ConversationChain. There is no obvious way to combine it with an output parser, because the memory object doesn't supply the partial-variable input (format_instructions) when the chain runs.
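One sketch that should get around this (unverified against every version; it reuses the Joke model and parser from above): resolve format_instructions as a partial variable when the system PromptTemplate is constructed, so the only runtime variables left are input (supplied by the caller) and history (supplied by the memory):

from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (ChatPromptTemplate, HumanMessagePromptTemplate,
                               MessagesPlaceholder, PromptTemplate,
                               SystemMessagePromptTemplate)

# format_instructions is filled in at construction time, so neither the
# caller nor the memory has to supply it at run time.
system_prompt = PromptTemplate(
    template="Tell a joke.\n{format_instructions}",
    input_variables=[],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate(prompt=system_prompt),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}"),
])
conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    prompt=chat_prompt,
    memory=ConversationBufferMemory(return_messages=True),
)

This should pass ConversationChain's prompt-variable validation ({"history", "input"}), but the chain still returns a raw string, so the parser still has to be applied outside the chain.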
Is the {format_instructions} parameter missing from the SystemMessagePromptTemplate? (There is only "Tell a joke" there.)
In my case, the output parser can be applied to the AI response in the ConversationChain, but it fails the type check when the AIMessage object is created as the memory saves the response to the chat message history (chat_memory): a string is expected as the content, but a Python object is provided.
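That suggests leaving the chain's output as a string, so the memory stores a valid AIMessage, and parsing outside the chain. A minimal sketch, assuming the conversation and parser set up above:

# The memory saves the raw str as the AIMessage content (satisfying the
# type check); structured parsing happens outside the chain.
raw = conversation.predict(input="Tell me a joke")
joke = parser.parse(raw)
print(joke.setup, joke.punchline)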
Hi, @jarmitage! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you are experiencing a KeyError when setting the output parser for a ConversationChain with a PydanticOutputParser. Additionally, you mentioned that you are not getting any JSON back when running the code. Some other users have shared their experiences and suggested potential solutions, such as using an LLMChain or checking the format_instructions parameter in the SystemMessagePromptTemplate.
Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.
Thank you for your understanding and cooperation. We look forward to hearing from you soon.