
Making Data Interpreter able to repair LLM output, similar to what is done in ActionNode

Open · usamimeri opened this issue 10 months ago · 1 comment

Feature description

Using open-source models with the Data Interpreter often leads to failures in generating correctly formatted JSON. I tried adding repair_llm_output: true to config2.yaml, but it didn't work.
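For reference, this is roughly what I added to config2.yaml (the llm values below are placeholders, not my real settings):

```yaml
llm:
  api_type: "openai"                    # placeholder; I actually serve an open-source model locally
  base_url: "http://localhost:8000/v1"  # placeholder endpoint
  model: "my-local-model"               # placeholder model name
  api_key: "sk-placeholder"

repair_llm_output: true                 # the option I expected to repair malformed JSON
```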

By checking the source code, I noticed that the repair work is handled in ActionNode, while my issue occurs in WriteAnalysisCode:

  1. The model generated an incorrect JSON response, which was passed into CodeParser.
  2. The parser's regex did not match.
  3. The logger reported an error, and the incorrectly formatted text was returned as-is.
  4. The incorrectly formatted text was passed back to the action, where decoding failed at reflection = json.loads(CodeParser.parse_code(block=None, text=rsp)).

The repair mechanism was never triggered; a minimal sketch of this failure path is below.
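The parse_code here is a hypothetical simplification of CodeParser.parse_code, just to show the pass-through behaviour, not the actual implementation:

```python
import json
import re

def parse_code(text: str, lang: str = "json") -> str:
    # Simplified stand-in for CodeParser.parse_code: try to extract a fenced block.
    match = re.search(rf"```{lang}.*?\n(.*?)```", text, re.DOTALL)
    if match:
        return match.group(1)
    # Steps 2-3: the regex does not match, an error is logged,
    # and the malformed text is returned unchanged.
    print("ERROR: pattern not matched, returning raw text")
    return text

# Step 1: the model produced JSON-ish text with no proper code fence and invalid syntax.
rsp = "Sure! Here is my reflection: {reflection: 'missing double quotes'}"

# Step 4: decoding the pass-through text fails, and no repair is attempted.
try:
    reflection = json.loads(parse_code(rsp))
except json.JSONDecodeError as e:
    print("json.loads failed:", e)
```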

Therefore, it would be ideal to adapt the repair functionality to the Data Interpreter as well.
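As a rough illustration of what I mean (not the real ActionNode repair code; repair_json_like below is a hypothetical placeholder for whatever repair helper the maintainers prefer), the action could try a repair pass before giving up:

```python
import json

def repair_json_like(text: str) -> str:
    # Hypothetical minimal repair: keep only the outermost {...} span and
    # swap single quotes for double quotes. ActionNode's real repair logic is richer.
    start, end = text.find("{"), text.rfind("}")
    candidate = text[start:end + 1] if start != -1 and end > start else text
    return candidate.replace("'", '"')

def parse_reflection(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Proposed behaviour when repair_llm_output is enabled:
        # attempt a repair instead of failing immediately.
        return json.loads(repair_json_like(raw))

print(parse_reflection("Here is my reflection: {'thought': 'ok'}"))
```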

Your Feature

Enable the Data Interpreter to post-process LLM output.

usamimeri · Apr 22 '24 10:04

FYI: @garylin2099

seehi · Apr 23 '24 02:04