Improving Resilience of MRKL Agent
This is an improved, cleaned-up version of pull request https://github.com/hwchase17/langchain/pull/3269.
Summary:
- Added the ability for the MRKL agent to recover from the `ValueError(f"Could not parse LLM output: {llm_output}")` error, which occurs whenever the LLM (especially gpt-3.5-turbo) does not follow the MRKL agent's format and fails to return "Action:" & "Action Input:".
- The fix responds back to the LLM with the message "Invalid Format: Missing 'Action:' after 'Thought:'" or "Invalid Format: Missing 'Action Input:' after 'Action:'", depending on whether "Action:" or "Action Input:" is missing from the LLM output (see the sketch below).
For a detailed explanation, look at the previous pull request.
New Updates:
- Since @hwchase17 requested in the previous PR that the self-correction (error) message be communicated via OutputParserException, I have extended the OutputParserException class so that it can store the observation & the previous llm_output and pass them on to the agent's next prompt. This is done without breaking or modifying any of OutputParserException's existing behaviour (i.e. it can still be used exactly as before, without passing an observation or llm_output), as sketched below.
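A rough sketch of what the backwards-compatible extension could look like; the `send_to_llm` flag name is an assumption inferred from the boolean default discussed below, and the exact diff may differ.

```python
# Sketch only: attribute names mirror the description above, not necessarily the exact diff.
class OutputParserException(ValueError):
    """Raised when an output parser fails to parse LLM output.

    `observation` and `llm_output` are optional, so existing callers that raise
    the exception with just an error message keep working unchanged.
    """

    def __init__(self, error, observation=None, llm_output=None, send_to_llm=False):
        super().__init__(error)
        self.observation = observation  # correction message fed back to the LLM
        self.llm_output = llm_output    # unparsable output, reused in the next prompt
        self.send_to_llm = send_to_llm  # opt-in flag; defaults to False (see below)
        if send_to_llm and (observation is None or llm_output is None):
            raise ValueError(
                "Arguments 'observation' & 'llm_output' are required if 'send_to_llm' is True"
            )
```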
@vowelparrot, kindly check out this pull request.
Sure, I have changed the default value back to False. You can merge this PR now (@vowelparrot, @hwchase17).
Also, as I asked in the previous PR https://github.com/hwchase17/langchain/pull/3269: do you want me to apply this feature to the react, self_ask_with_search & conversational agents as well, in a future PR?