Tool outputs truncated during execution?
Bug description
I'm calling a tool, foo(), inside DataInterpreter, which returns data. The output of the tool is always truncated to approximately 1918 characters. This breaks my execution, since I lose a lot of context.
Bug solved method
Environment information
- LLM type and model name:
- System version:
- Python version: 3.10
- MetaGPT version or branch: git clone https://github.com/geekan/MetaGPT
- packages version:
- installation method:
Screenshots or logs
Is it reproducible? Could you provide more details, like the code that was executed? Additionally, does this issue describe the same problem as https://github.com/geekan/MetaGPT/issues/1162?
No, #1162 is an independent bug. It is somewhat related to this one, but it needs its own attention or a workaround.
Code I executed:
```python
from metagpt.roles.di.data_interpreter import DataInterpreter
from metagpt.tools.tool_registry import register_tool


@register_tool()
def magic_function(arg1: str, arg2: int, arg3: str) -> dict:
    """The magic function that does something.

    Args:
        arg1 (str): ...
        arg2 (int): ...
        arg3 (str): ...

    Returns:
        dict: ...
    """
    foo()
    return {"arg1": arg1 * 3, "arg2": arg2 * 5, "arg3": arg3}


@register_tool()
def foo():
    """Just return dummy output for testing."""
    return """<4000 character string - could be anything>"""


async def main():
    di = DataInterpreter(tools=["magic_function", "foo"])
    await di.run("""Call foo first and store its return value as "output". Then call the magic function with arg1 'A' and arg2 2 and arg3 output. Tell me the result.""")


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
The reason is that parse_outputs has a parameter keep_len with a default value of 2000; it should be made configurable later on.
You can try to modify it manually to see the effect.
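For illustration, here is a minimal sketch of how a keep_len-style truncation typically works: the head and tail of a long output are kept and the middle is dropped, so the surviving text comes in just under keep_len. The function name and exact split below are assumptions for the sketch, not the actual MetaGPT parse_outputs implementation.

```python
# Hypothetical sketch of keep_len truncation (NOT the real MetaGPT code):
# keep roughly the first and last keep_len//2 characters of a long output.
def truncate_output(output: str, keep_len: int = 2000) -> str:
    """Return output unchanged if short enough, else keep head + tail only."""
    if len(output) <= keep_len:
        return output
    half = keep_len // 2
    return output[:half] + "..." + output[-half:]


long_output = "x" * 4000
print(len(truncate_output(long_output)))                   # well under 4000
print(len(truncate_output(long_output, keep_len=10000)))   # 4000, untruncated
```

With the default of 2000, a 4000-character tool result is cut down to roughly 2000 characters, matching the ~1918-character outputs reported above; raising keep_len past the output size avoids the truncation entirely.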
Brilliant! I changed keep_len to 10000 and my outputs are fine now. Maybe make it a configurable parameter? Thank you for your prompt response.