MetaGPT
Make the Data Interpreter able to repair LLM output, similar to what is done in ActionNode
Feature description
Using open-source models with the Data Interpreter often leads to failures to generate correctly formatted JSON. I tried adding `repair_llm_output: true` to config2.yaml, but it didn't work.
By checking the source code, I noticed that the repair work is performed in ActionNode, while my issue occurs in WriteAnalysisCode:
- The model generated an incorrect JSON response, which was passed to the CodeParser.
- The parser's regex did not match.
- The logger reported an error, and the incorrectly formatted text was returned directly.
- The malformed text was passed back to the action, where it failed during decoding in
  `reflection = json.loads(CodeParser.parse_code(block=None, text=rsp))`
- The repair mechanism was never triggered.
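The failure path above can be reproduced with a minimal sketch. Note that `parse_code` here is a simplified stand-in for `CodeParser.parse_code`, not MetaGPT's actual implementation; the point is only that a non-matching regex falls back to returning the raw text, which then blows up in `json.loads`:

```python
import json
import re


def parse_code(text: str, lang: str = "json") -> str:
    """Simplified stand-in for CodeParser.parse_code: extract a fenced block,
    or fall back to returning the raw text when the regex does not match."""
    match = re.search(rf"```{lang}\s*(.*?)```", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    # No fenced block found: report an error, then pass the text through unchanged.
    print(f"ERROR: no ```{lang}``` block matched; returning raw text")
    return text


# An open-source model may emit truncated JSON without the expected code fence.
rsp = '{"reflection": "fix the import", "improved_impl": "..."'  # unclosed brace

extracted = parse_code(rsp)  # regex misses, raw text passed through
try:
    reflection = json.loads(extracted)
except json.JSONDecodeError as e:
    # This is where the action currently fails, with no repair attempt.
    print(f"decoding failed: {e}")
```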
Therefore, it would be ideal to adapt the repair functionality for the Data Interpreter as well.
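As a rough illustration of what such an adaptation could look like, here is a hedged sketch of a decode-with-repair wrapper. The function names (`repair_json_output`, `loads_with_repair`) are hypothetical and this only mirrors the spirit of ActionNode's repair path, not its actual implementation:

```python
import json


def repair_json_output(raw: str) -> str:
    """Hypothetical post-processing pass: apply cheap fixes for common
    malformations (markdown fences, truncated braces) before giving up."""
    text = raw.strip()
    # Strip markdown fences the model may have wrapped around the JSON.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
        text = text.strip()
    # Close any unclosed braces/brackets (a common truncation artifact),
    # tracking nesting order and skipping characters inside string literals.
    stack, in_string, escape = [], False, False
    for ch in text:
        if escape:
            escape = False
            continue
        if ch == "\\":
            escape = True
            continue
        if ch == '"':
            in_string = not in_string
            continue
        if in_string:
            continue
        if ch in "{[":
            stack.append(ch)
        elif ch in "}]" and stack:
            stack.pop()
    closers = {"{": "}", "[": "]"}
    return text + "".join(closers[c] for c in reversed(stack))


def loads_with_repair(raw: str) -> dict:
    """Try to decode; on failure, repair once and retry."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(repair_json_output(raw))
```

Hooking something like `loads_with_repair` into the decoding step in WriteAnalysisCode (instead of a bare `json.loads`) would let the existing `repair_llm_output` config take effect for the Data Interpreter too.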
Your Feature: enable the Data Interpreter to post-process LLM output.
FYI: @garylin2099