Guess
Thank you for your feedback. The issue isn't caused by `reduce_message_length` itself, but by incorrect usage of the `reserved` parameter when calling `reduce_message_length`. In the v0.6-release branch, the source code...
The logic here is to gradually shorten the prompt until it fits within the length limit. The `config.llm.max_token` parameter represents the maximum length of the response generated by the...
```python
max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved
```

is equal to

```
available_for_prompt = max_io_size_of_model - system_text_used - preallocated_for_response
```

"prompt" corresponds to the first parameter, and...
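To illustrate the arithmetic, here is a minimal, self-contained sketch. `TOKEN_MAX` and `count_string_tokens` below are simplified stand-ins for the real helpers, and the tokenizer is a crude placeholder, so treat this only as a picture of the budget calculation, not as MetaGPT's implementation.

```python
# Sketch of the token-budget arithmetic described above (stand-in helpers only).
TOKEN_MAX = {"gpt-3.5-turbo": 4096, "gpt-4": 8192}  # assumed per-model I/O limits

def count_string_tokens(text: str, model_name: str) -> int:
    # Crude placeholder tokenizer; real code would use tiktoken or similar.
    return len(text.split())

def available_prompt_tokens(model_name: str, system_text: str, reserved: int) -> int:
    """Tokens left for the prompt once the system text and the response
    budget (`reserved`) are subtracted from the model's total I/O limit."""
    return TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved
```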
Is this MetaGPT v0.6? I noticed that the format returned by the LLM is incorrect. The expected format is `[CONTENT]json[/CONTENT]`, but the LLM returns:

```
As a Product Manager, my goal...
```
The expected format is `[CONTENT]json[/CONTENT]`, which is required by the prompt. If the LLM returns content that does not follow this format, nothing downstream can reformat it; only the LLM itself can produce the expected output.
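On the caller side you can at least detect whether the tags are present before trying to load the JSON. The sketch below is purely illustrative and is not MetaGPT's actual parsing or repair logic.

```python
import json
import re

def extract_content_block(reply: str):
    """Pull the JSON payload out of a [CONTENT]...[/CONTENT] wrapper.
    Returns None if the LLM did not follow the expected format."""
    match = re.search(r"\[CONTENT\](.*?)\[/CONTENT\]", reply, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(1).strip())
    except json.JSONDecodeError:
        return None
```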
1. About `the generated code is not self-consistent`: the LLM does not always produce a good design and then write good code on top of it. I suggest retrying if it fails.
2. About `--run-tests...
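A retry can be as simple as the hypothetical wrapper below (not part of MetaGPT); the point is just to re-run the generation step when the result fails your own check.

```python
async def run_with_retries(make_attempt, attempts: int = 3):
    """Hypothetical helper: re-run an async generation step a few times,
    re-raising the last failure if every attempt fails."""
    last_exc = None
    for _ in range(attempts):
        try:
            return await make_attempt()
        except Exception as exc:  # in practice, catch the specific failure type
            last_exc = exc
    raise last_exc
```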
`process_message` has been moved to the `provider` folder and renamed to `format_msg` (fixbug #1058). Your approach to the modification is correct. However, `LLM` has a set of functions for formatting messages,...
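Roughly speaking, such a formatting helper normalizes whatever the caller passes into the list-of-dicts shape that chat-completion APIs expect. The sketch below is a generic illustration of that idea, not MetaGPT's `format_msg`.

```python
from typing import Union

def to_chat_messages(messages: Union[str, dict, list]) -> list:
    """Generic illustration (not MetaGPT's format_msg): normalize str, dict,
    Message-like objects, or a mix of them into [{"role": ..., "content": ...}]."""
    if isinstance(messages, (str, dict)):
        messages = [messages]
    formatted = []
    for msg in messages:
        if isinstance(msg, str):
            formatted.append({"role": "user", "content": msg})
        elif isinstance(msg, dict):
            formatted.append(msg)
        else:
            # Assume a Message-like object exposing to_dict().
            formatted.append(msg.to_dict())
    return formatted
```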
There are some conditions that need to be met before DI (the Data Interpreter) can use the tool:

1. Proper comments for the tool. Here is a demo:

```python
async def create_repo(self, idea: str) -> Path:
    """ ...
```
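For reference, a fuller version of such a docstring might look like the hypothetical sketch below (the wording is illustrative, not the elided original). The tool recommender relies on this text to understand what the tool does and how to call it.

```python
from pathlib import Path

class RepoTool:
    async def create_repo(self, idea: str) -> Path:
        """Create a software repository from a one-line idea.

        Args:
            idea (str): A short natural-language description of the product.

        Returns:
            Path: The local path of the generated repository.
        """
        ...
```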
Set a breakpoint at line 95 of `metagpt/tools/tool_recommend.py` and see:
`register_tool` should be called at initialization. You can add your tool API to `metagpt/tools/libs/__init__.py`.
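As a rough sketch (the module name is hypothetical and the import path of the decorator is an assumption that may differ across versions), registering a custom tool and wiring it into `metagpt/tools/libs/__init__.py` could look like this:

```python
# metagpt/tools/libs/repo_tool.py  (hypothetical module name)
from metagpt.tools.tool_registry import register_tool  # assumed import path

@register_tool()
class RepoTool:
    """Create a software repository from an idea."""

    async def create_repo(self, idea: str):
        ...

# metagpt/tools/libs/__init__.py
# Importing the module here makes the register_tool decorator run at
# initialization, so the tool becomes visible to the tool recommender:
# from metagpt.tools.libs import repo_tool
```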