langchainjs
initializeAgentExecutorWithOptions seems to remove newlines from response?
I am using it like so:
const model = new OpenAI({modelName: "gpt-3.5-turbo", temperature: 0, verbose: true});
const chatModel = new ChatOpenAI({modelName: "gpt-4", verbose: true});
const messageHistory = new ChatMessageHistory(messages);
const memory = new ConversationSummaryMemory({
  llm: model,
  chatHistory: messageHistory,
  memoryKey: "chat_history",
  inputKey: "input",
  returnMessages: true,
});
const tools = [
  new SerpAPI(serpApiKey, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];
const executor = await initializeAgentExecutorWithOptions(tools, chatModel, {
  agentType: "chat-conversational-react-description",
  agentArgs: {systemMessage: systemPrompt},
  verbose: true,
  memory,
});
It works fine, though, if I just use the ChatOpenAI object directly:
const chatModel = new ChatOpenAI({modelName: "gpt-4", verbose: true});
chatModel.call([new HumanChatMessage("What are 5 rules to live by?")]);
Any pointers?
Figured it out... it's an issue with the FORMAT_INSTRUCTIONS. It needs to add "and make sure to use valid json newline characters." and become:
export const FORMAT_INSTRUCTIONS = `RESPONSE FORMAT INSTRUCTIONS
----------------------------
When responding to me, please output a response in one of two formats:
**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:
\`\`\`json
{{{{
"action": string, \\ The action to take. Must be one of {tool_names}
"action_input": string \\ The input to the action
}}}}
\`\`\`
**Option #2:**
Use this if you want to respond directly to the human. Markdown code snippet formatted in the following schema:
\`\`\`json
{{{{
"action": "Final Answer",
"action_input": string \\ You should put what you want to return to use here and make sure to use valid json newline characters.
}}}}
\`\`\``;
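For context on why the added instruction matters (a minimal, self-contained sketch, not part of the proposed patch): the JSON spec forbids raw control characters inside string values, so a literal newline inside `action_input` makes `JSON.parse` throw, while the escaped `\n` form parses fine:

```javascript
// A raw newline character inside a JSON string is invalid,
// so parsing it throws a SyntaxError.
let rawFailed = false;
try {
  JSON.parse('{"action_input": "line 1\nline 2"}');
} catch (e) {
  rawFailed = true;
}

// The escaped form \n is valid JSON and round-trips to a real newline.
const parsed = JSON.parse('{"action_input": "line 1\\nline 2"}');

console.log(rawFailed);                          // true
console.log(parsed.action_input.includes("\n")); // true
```

This is why the prompt has to push the model toward emitting `\n` escapes instead of literal line breaks inside the JSON payload.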
@nfcampos would it make sense to open a PR for this?
And should it be configurable, either by allowing a custom prompt to be passed in or via a parameter?
Can you show me the difference between the output you're getting and the output you expected? That way I can understand whether this change is desirable for everyone, and we can avoid adding another option.
Without the change:
{
"action": "Final Answer",
"action_input": "1. Treat others as you would like to be treated. 2. Always be honest and true to yourself. 3. Embrace learning and personal growth. 4. Practice gratitude and focus on the positive aspects of life. 5. Take responsibility for your actions and strive to make a positive impact on the world."
}
And with the change (note the \n newline characters):
{
"action": "Final Answer",
"action_input": "1. Treat others as you would like to be treated.\\n2. Always be honest and true to yourself.\\n3. Embrace learning and personal growth.\\n4. Practice gratitude and focus on the positive aspects of life.\\n5. Take responsibility for your actions and strive to make a positive impact on the world."
}
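To make the difference concrete (an illustrative check, not from the original thread), parsing the second response yields a single string containing real newline characters, so the numbered-list formatting survives:

```javascript
// A trimmed version of the "with the change" response above.
const response =
  '{"action": "Final Answer", "action_input": "1. Treat others as you would like to be treated.\\n2. Always be honest and true to yourself."}';

// After JSON.parse the \n escapes become real newlines in the string.
const { action_input } = JSON.parse(response);
const lines = action_input.split("\n");

console.log(lines.length); // 2
console.log(lines[1]);     // "2. Always be honest and true to yourself."
```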
Thanks, I think that's a change that is good for everyone; we shouldn't be modifying what the LLM sends inside a JSON field. I'd be happy to review a PR for this if you want to open one.
Here you go: https://github.com/hwchase17/langchainjs/pull/1002
I think this issue still happens? It happens regardless of whether the AI uses search or not.
Yeah, after fixing this one I found tons more cases across the stack where formatting gets removed.
It also breaks if the response contains a markdown code block.