langchain
Can't use human_prefix and ai_prefix with agent
System Info
LangChain 0.0.223, Linux, Python 3.11
Who can help?
@hwchase17 @agola11
Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
Reproduction
Something like this:
```python
# Imports assumed for LangChain 0.0.x (adjust to your setup):
from decouple import config
from langchain.agents import AgentType, ChatAgent
from langchain.memory import ConversationBufferMemory, PostgresChatMessageHistory

# Just a slight mod of the Postgres class for sorting the results by date
chat_history = PostgresChatMessageHistory(
    connection_string=config('SUPABASE_POSTGRES_CONNECT_STRING'),
    session_id="58964243-23cd-41fe-ad05-ecbfd2a73202",  # str(uuid.uuid4()),
    table_name="chat_history",
)
memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=chat_history,
    return_messages=True,
    human_prefix="USER",       # This doesn't work.
    ai_prefix="ASSISTANT",     # This doesn't work.
)
agent = ChatAgent(
    name="Chat Assistant",
    tools=_tools,
    agent_type=AgentType.OPENAI_MULTI_FUNCTIONS,
    llm=openai,
)
```
If I inspect the prompt that was sent, and the rows stored in Postgres, it always shows "Human" and "AI".
Expected behavior
I expect USER and ASSISTANT to be used everywhere after I set them. I see this as especially important when using OpenAI's chat endpoint, since their models were trained on these tokens.
I also think it would be better to load the memory/chat history the way the OpenAI API expects it (as a list of role-tagged messages) instead of inlining it into the SYSTEM message, but perhaps that's for another issue.
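For illustration, here is a minimal sketch of what "loading history as a list of messages" could look like. The `to_openai_messages` helper and the tuple-based history format are hypothetical (not LangChain API); only the `"user"`/`"assistant"` role names come from the OpenAI chat format:

```python
# Hypothetical helper (not part of LangChain): convert a stored history of
# (speaker, text) tuples into the role-tagged message list the OpenAI chat
# endpoint expects, instead of flattening everything into one SYSTEM string.
def to_openai_messages(history):
    role_map = {"human": "user", "ai": "assistant"}
    return [{"role": role_map[speaker], "content": text} for speaker, text in history]

history = [("human", "Hi there!"), ("ai", "Hello! How can I help?")]
print(to_openai_messages(history))
# [{'role': 'user', 'content': 'Hi there!'}, {'role': 'assistant', 'content': 'Hello! How can I help?'}]
```

The resulting list could be passed directly as the `messages` parameter of a chat-completion request, with no role prefixes baked into the text at all.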
Answer generated by a 🤖
Answer
I understand that you're having trouble using the `human_prefix` and `ai_prefix` parameters with the `ConversationBufferMemory` object in LangChain. You're expecting to see "USER" and "ASSISTANT" as prefixes in your conversation history, but instead you're seeing "Human" and "AI".
The issue here is that the `human_prefix` and `ai_prefix` parameters on the `ConversationBufferMemory` object don't directly control the prefixes used in the prompt sent to the model. They only affect how the conversation is rendered when the memory is read back as a string. To change the prefixes that actually appear in the conversation, you need to modify the `PromptTemplate` and `ConversationChain` objects.
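To make the distinction concrete: when the buffer is rendered as a string, LangChain applies the prefixes via its `get_buffer_string` helper; with `return_messages=True`, the raw message objects are handed to the prompt instead, so the prefixes never get applied. The snippet below is a standalone, pure-Python sketch of that rendering logic (the dataclasses mirror LangChain's message types but are defined locally, so it runs without LangChain installed):

```python
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

def get_buffer_string(messages, human_prefix="Human", ai_prefix="AI"):
    # Render each message as "<prefix>: <content>", like LangChain's helper
    # does when memory is read back as a string (return_messages=False).
    lines = []
    for m in messages:
        prefix = human_prefix if isinstance(m, HumanMessage) else ai_prefix
        lines.append(f"{prefix}: {m.content}")
    return "\n".join(lines)

history = [HumanMessage("Hi there!"), AIMessage("Hello!")]
print(get_buffer_string(history, human_prefix="USER", ai_prefix="ASSISTANT"))
# USER: Hi there!
# ASSISTANT: Hello!
```

Note the prefixes only exist in this string form; if the memory hands message objects straight to the model, nothing ever calls this rendering step.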
Here's how you can do it:
```python
from langchain.chains import ConversationChain   # imports added for completeness
from langchain.memory import ConversationBufferMemory
from langchain.prompts.prompt import PromptTemplate

# Define a new prompt template with your desired prefixes
template = """
Current conversation:
{history}
USER: {input}
ASSISTANT:"""
PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

# Use the new prompt template in the ConversationChain
conversation = ConversationChain(
    prompt=PROMPT,
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory(),
)

# Now when you use the conversation object, the prefixes will be "USER" and "ASSISTANT"
conversation.predict(input="Hi there!")
```
In this code, we define a new `PromptTemplate` in which the prefixes are "USER" and "ASSISTANT", and we use that template in the `ConversationChain` object. Now, when you call the `predict` method of the `ConversationChain`, the prefixes used in the conversation will be "USER" and "ASSISTANT".
This is one potential solution to your issue. If you believe this is a bug that could impact other users, you're welcome to make a pull request with a proposed fix. If this solution doesn't align with your experience, I would recommend checking if you're using the latest version of LangChain.
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Hi, @bitnom! I'm Dosu, and I'm here to help the LangChain team manage our backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you reported an issue regarding the `human_prefix` and `ai_prefix` parameters not working as expected when used with the agent in the provided code. There was a suggestion from me to modify the `PromptTemplate` and `ConversationChain` objects to control the prefixes used in the conversation. However, it seems that both you and @ClementViricel have reacted negatively to this proposed solution.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!
Still got this issue with LangChain 1.1.14.