
Improving Resilience of MRKL Agent

svdeepak99 opened this pull request • 2 comments

This is a highly optimized update to the pull request https://github.com/hwchase17/langchain/pull/3269

Summary:

  1. Added the ability for the MRKL agent to recover on its own from the `ValueError(f"Could not parse LLM output: {llm_output}")` error, which is raised whenever the LLM (especially gpt-3.5-turbo) does not follow the MRKL agent's expected format when returning "Action:" & "Action Input:".
  2. The way I am solving this error is by responding to the LLM with the message "Invalid Format: Missing 'Action:' after 'Thought:'" when "Action:" is absent from the LLM output, or "Invalid Format: Missing 'Action Input:' after 'Action:'" when "Action Input:" is absent.
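The retry logic described above can be sketched as a small standalone parser. This is a hypothetical simplification (the function name and return convention are mine, not the langchain implementation): it returns either the parsed `(action, action_input)` pair or the correction message that would be fed back to the LLM.

```python
import re


def parse_mrkl_output(llm_output: str):
    """Parse an MRKL-style completion.

    Returns a (action, action_input) tuple on success, or a correction
    string to send back to the LLM when the expected markers are missing.
    Hypothetical sketch; not the actual langchain parser.
    """
    if "Action:" not in llm_output:
        return "Invalid Format: Missing 'Action:' after 'Thought:'"
    if "Action Input:" not in llm_output:
        return "Invalid Format: Missing 'Action Input:' after 'Action:'"
    match = re.search(
        r"Action\s*:\s*(.*?)\s*Action\s*Input\s*:\s*(.*)",
        llm_output,
        re.DOTALL,
    )
    return match.group(1), match.group(2).strip()
```

An agent loop would append the returned correction string to the scratchpad as an observation and call the LLM again, instead of aborting with a `ValueError`.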

For a detailed explanation, look at the previous pull request.

New Updates:

  1. Since @hwchase17 requested in the previous PR that the self-correction (error) message be communicated via OutputParserException, I have extended the OutputParserException class to store the observation & previous llm_output so they can be passed into the next agent prompt. This is done without breaking or modifying any of OutputParserException's existing behavior (i.e. OutputParserException can still be used exactly as before, without passing an observation & previous llm_output).
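The backwards-compatible extension described above might look roughly like the following. This is a minimal sketch, not the exact langchain code; the `send_to_llm` flag name and the validation logic are assumptions on my part:

```python
class OutputParserException(ValueError):
    """Exception a parser raises when it cannot parse an LLM completion.

    Sketch of the extended version: it optionally carries the correction
    message (observation) and the offending completion (llm_output) so
    the agent can feed both into the next prompt. Both extra fields
    default to None, so raising it with just an error string still works
    exactly as before.
    """

    def __init__(self, error, observation=None, llm_output=None,
                 send_to_llm=False):
        super().__init__(error)
        if send_to_llm and (observation is None or llm_output is None):
            raise ValueError(
                "Arguments 'observation' and 'llm_output' are required "
                "when 'send_to_llm' is True"
            )
        self.observation = observation
        self.llm_output = llm_output
        self.send_to_llm = send_to_llm
```

The agent executor can then catch this exception and, when `send_to_llm` is set, append `llm_output` followed by `observation` to the scratchpad so the model sees both its malformed output and the correction message on the next call.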

svdeepak99 avatar May 20 '23 03:05 svdeepak99

@vowelparrot kindly check out this pull request.

svdeepak99 avatar May 20 '23 03:05 svdeepak99

Sure, I have changed the default value back to False. You can merge this PR now (@vowelparrot, @hwchase17).

Also, as I asked in the previous PR https://github.com/hwchase17/langchain/pull/3269, would you like me to apply this feature to the react, self_ask_with_search & conversational agents in a future PR as well?

svdeepak99 avatar May 20 '23 18:05 svdeepak99