Error when overriding default prompt template of ConversationChain
Hi, does anyone know how to override the prompt template of ConversationChain? I am creating a custom prompt template that takes in an additional input variable:
PROMPT_TEMPLATE = """ {my_info}
{history}
Human: {input}
AI:"""
PROMPT = PromptTemplate(
input_variables=["history", "input", "my_info"], template=PROMPT_TEMPLATE
)
conversation_chain = ConversationChain(
prompt=PROMPT,
llm=OpenAI(temperature=0.7),
verbose=True,
memory=ConversationBufferMemory()
)
but got the following error:
Got unexpected prompt input variables. The prompt expects ['history', 'input', 'my_info'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
Is my understanding correct that ConversationChain currently only supports prompt templates that take "history" and "input" as input variables?
yes you're correct - conversation chain only currently allows for a single input, that being the input key, and then also history (coming from memory)
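For context, the check that raises this error (linked further down in this thread) boils down to roughly the following. This is a paraphrase of the validator in langchain/chains/conversation/base.py, not the exact source, and details may differ between versions:

def validate_prompt_input_variables(prompt, memory, input_key="input"):
    # Paraphrase of ConversationChain's root validator: the prompt's input
    # variables must be exactly the memory variables plus the input key.
    memory_keys = memory.memory_variables  # e.g. ["history"]
    expected_keys = memory_keys + [input_key]
    if set(expected_keys) != set(prompt.input_variables):
        raise ValueError(
            f"Got unexpected prompt input variables. The prompt expects "
            f"{prompt.input_variables}, but got {memory_keys} as inputs from "
            f"memory, and {input_key} as the normal input key."
        )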
How should I handle this situation if I want to add context?
system_template="""Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""
messages = [
SystemMessagePromptTemplate.from_template(system_template),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
]
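One workaround, assuming the context is fixed for the lifetime of the chain (this is my sketch, not something from the thread), is to bind the extra variable with .partial() so the template's remaining input variables are exactly "history" and "input":

from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)

# system_template is the template defined above, with a {context} variable.
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}"),
]).partial(context="...your retrieved context here...")

# prompt.input_variables is now just ["history", "input"], which is
# exactly what ConversationChain's validation expects.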
I'm looking into this issue as well. Is it possible to add additional context to the system prompt for ConversationChain?
I am facing a similar issue. There are use cases where we need to customize the prompt in ConversationChain. The current implementation does not support that, and I really hope LangChain can support customized prompts to make ConversationChain more flexible, or even better, allow different prompts as the conversation goes on. As I write this, it sounds to me more like an Agent than a Chain. Is there an agent class that is capable of customizing the prompt, or a conversation agent that can use different prompts and pass along the chat history as the conversation goes?
I've opened a PR that I believe addresses this issue. My understanding is that currently ConversationChain's memory does not inherit the conversation chain's input_key, and we try to deduce it with get_prompt_input_key, assuming there are only memory, input, and stop variables in the prompt. The PR suggests that the memory of the conversation chain inherit the conversation chain's input_key.
As exemplified in the PR, with the proposed change we can use a system prompt template such as the following:
system_msg_template = SystemMessagePromptTemplate.from_template(
    template="You are a translator helping me in translating from {input_language} to {output_language}. "
             "Please translate the messages I type."
)
What do you think @hwchase17?
I'm looking into this issue as well.
Any update on this issue?
I need this case too.
Same for me; it would make it a lot easier to get good results for languages other than English and to improve the first response.
Any fix? I'm only using the two variables and it still doesn't work.
??
What's the current workaround for this? Using a single prompt template that takes one input, and putting the SystemMessage and the HumanMessage in there?
This is very weird. I can get ConversationChain to work with multiple inputs in the JS library, but it fails in Python. I moved my whole app to Python to make it faster... and now this. :-(
I also encountered the same problem. It seems that you can't use a customized variable to replace the "input" placeholder.
Do we have any updates on this one?
@hwchase17
I too am waiting on this.
same here :)
Same here
Same here
As a workaround I'm just subclassing the memory and using that instead, like this...
from typing import Any, Dict, List

from langchain.memory import ConversationBufferMemory


class ExtendedConversationBufferMemory(ConversationBufferMemory):
    extra_variables: List[str] = []

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables."""
        return [self.memory_key] + self.extra_variables

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return buffer with history and extra variables."""
        d = super().load_memory_variables(inputs)
        d.update({k: inputs.get(k) for k in self.extra_variables})
        return d
...then initialize the chain with an instance of that memory class
llm_chain = ConversationChain(
    llm=llm,
    prompt=prompt,
    memory=ExtendedConversationBufferMemory(extra_variables=["context"])
)
After that I just add the "context" (and any other extra variables) by passing them via the input when calling the chain:
result = llm_chain({"input": "some input", "context": "whatever context"})
I'm not sure whether this is a good way to do it, but it works fine in my case, so maybe it will also work for others.
Hi!
Is this what you all are looking for?
https://github.com/hwchase17/langchain/issues/5462
I was able to use this to set a new system message
Hi! Even though @Bananenfraese's solution might work, I'd let template classes do template things. As someone said above, ConversationChain only allows 'history' and 'input' as input variables for the PromptTemplate, nothing more, nothing less. If you're interested in the related code (and in understanding why @Bananenfraese's solution works), check this method: https://github.com/hwchase17/langchain/blob/fcb3a647997c6275e3d341abb032e5106ea39cac/langchain/chains/conversation/base.py#L44
However, you can just pass those values to the PromptTemplate and let it use them. Here is the example from @universe6666 modified:
PROMPT_TEMPLATE = """ {my_info}
{history}
Human: {input}
AI:"""
# define a custom PromptTemplate that supports your new variables
class CustomPromptTemplate(StringPromptTemplate):
my_info: str
def format(self, **kwargs) -> str:
kwargs['my_info']=self.my_info
return self.template.format(**kwargs)
# make sure you feed the PromptTemplate with the new variables
PROMPT = CustomPromptTemplate(
input_variables=["history", "input"], template=PROMPT_TEMPLATE, my_info="whatever"
)
conversation_chain = ConversationChain(
prompt=PROMPT,
llm=OpenAI(temperature=0.7),
verbose=True,
memory=ConversationBufferMemory()
)
Disclaimer: not tested, but enough for you to see a clean way to solve this.
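A lighter-weight variant of the same idea (my suggestion rather than something from the thread, so equally untested) is the built-in partial() support on PromptTemplate, which removes the pre-filled variable from input_variables so ConversationChain's validation passes:

from langchain.prompts import PromptTemplate

PROMPT = PromptTemplate(
    input_variables=["history", "input", "my_info"], template=PROMPT_TEMPLATE
).partial(my_info="whatever")
# PROMPT.input_variables is now just ["history", "input"].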
Glad I saw this; I thought I was missing something obvious. My horrid kludge is to add what I need to the beginning of the template, e.g. PROMPT.template = f'The current date is {todayFormat}.' + PROMPT.template
How should I handle this situation if I want to add context?
system_template = """Use the following pieces of context to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""

messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}"),
]
@tezer @universe6666 For what it's worth I use this pattern, but I inject contextual data into my main prompt (a Jinja2 template in a YAML file), and then make that the initial system message. Then you have the interaction frame and instructions, any contextual data, the chat history, then the next input.
You can:
- Have your main prompt take any number of contextual data variables.
- Create a SystemMessagePromptTemplate from a Jinja2 template like: smpt = SystemMessagePromptTemplate.from_template(my_prompt_template, template_format="jinja2")
- Format the prompt with your data like: smpt = smpt.format(**context_data)
- Use it in a ChatPromptTemplate like this:
default_chat_prompt = ChatPromptTemplate.from_messages([
    # SystemMessage, contains the prompt with context data injected above.
    smpt,
    # Placeholder for chat history.
    MessagesPlaceholder(variable_name="history"),
    # Incoming message from user.
    HumanMessagePromptTemplate.from_template("{input}"),
])
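To round this out, here is a sketch of wiring that prompt into ConversationChain (my addition, not part of the comment above; ChatOpenAI and the temperature are placeholders). Because smpt was already formatted with the context data, the prompt's remaining input variables are exactly "history" and "input", so the chain's validation passes:

from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chain = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    prompt=default_chat_prompt,
    # return_messages=True makes the history a list of messages,
    # which is what MessagesPlaceholder expects.
    memory=ConversationBufferMemory(return_messages=True),
)
result = chain({"input": "Hello there!"})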
Hi, @universe6666,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue you raised pertains to the mismatch between expected input variables in the prompt template and the actual input received in ConversationChain. It has garnered significant attention from the community, with discussions on potential solutions and alternative approaches. Notably, ulucinar has opened a pull request addressing the issue.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to LangChain!
extra_variables=["context"]
It worked for me. Does anyone know whether they've fixed this issue (putting context into the ConversationChain prompt at every run)?