Azure Wang

7 comments of Azure Wang

> @azurewtl Thanks for your improvements. It works when I call `define_log_level(print_level="DEBUG")` at my Python script's entry point if I want to print the LLM stream log. However, I'm considering whether...
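For illustration, a minimal sketch of that call at a script's entry point (the import path `metagpt.logs` is an assumption and may differ between MetaGPT versions):

```python
# Minimal sketch; the import path is an assumption and may differ by version.
from metagpt.logs import define_log_level, logger

# Raise the console log level at the script entry so DEBUG output,
# including the LLM stream log, is printed while the program runs.
define_log_level(print_level="DEBUG")

logger.debug("debug logging enabled")
```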

> Thank you for your feedback. The issue isn't caused by `reduce_message_length`, but rather by incorrect usage of the `reserved` parameter when using `reduce_message_length`. In the v0.6-release branch, the source...

STEP1: `max_token` is calculated as `token_max - token_of_sys - reserved`:

```python
max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved
```

STEP2: the msg is compared with `max_token`, so this `max_token`...
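For readers following along, a rough sketch of those two steps put together; this is not the actual v0.6 source, and the loop over progressively shorter candidates as well as the import path are assumptions:

```python
# Rough sketch of the two steps above, assuming TOKEN_MAX and
# count_string_tokens live in metagpt.utils.token_counter.
from metagpt.utils.token_counter import TOKEN_MAX, count_string_tokens

def sketch_reduce_message_length(msgs, model_name, system_text, reserved=0):
    # STEP 1: budget left for the message after the system text and `reserved`.
    max_token = (
        TOKEN_MAX.get(model_name, 2048)
        - count_string_tokens(system_text, model_name)
        - reserved
    )
    # STEP 2: return the first candidate message that fits the budget,
    # assuming `msgs` yields progressively shorter versions of the content.
    for msg in msgs:
        if count_string_tokens(msg, model_name) < max_token:
            return msg
    raise RuntimeError("failed to reduce message length")
```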

In `metagpt/actions/research.py`, line 137:

```python
prompt = reduce_message_length(gen_msg(), model_name, system_text, 0)
logger.debug(prompt)
queries = await self._aask(prompt, [system_text])
```

It seems like the reduced result is fed into the LLM as `prompt`, so...
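A hedged variant of that call (the reserve value of 1000 is arbitrary and purely illustrative) shows how a nonzero `reserved` would keep part of the shared budget free for the completion:

```python
# Hypothetical variant of the snippet above; only the fourth argument changes.
# The value 1000 is arbitrary: it just sets aside part of the model's token
# budget for the completion instead of letting the prompt consume all of it.
prompt = reduce_message_length(gen_msg(), model_name, system_text, 1000)
logger.debug(prompt)
queries = await self._aask(prompt, [system_text])
```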

I see! Tokens are shared between the prompt and the completion. Thanks
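To make that concrete, a small worked example with made-up numbers (none of them taken from a real model's limits):

```python
# Illustration only: the numbers are invented, not any model's real limits.
token_max = 4096          # total context window shared by prompt + completion
system_tokens = 96        # tokens used by the system text
reserved = 1000           # tokens kept free for the model's completion

# Budget the prompt is allowed to use after the system text and the reserve.
max_token = token_max - system_tokens - reserved   # 3000

# Whatever the prompt consumes, the completion can only use what is left of
# the shared window, which is why `reserved` should cover the expected reply.
prompt_tokens = 2800
completion_budget = token_max - system_tokens - prompt_tokens  # 1200
assert prompt_tokens <= max_token
print(max_token, completion_budget)
```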

I can't read the unit test's message. This PR updates `token_max` for OpenAI according to the latest documentation, which solves a known issue. Could someone help to get this...
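For context, the kind of change such a PR would make is sketched below; the mapping name `TOKEN_MAX` matches the usage quoted above, but the concrete values here are only examples, and the real limits should be taken from the OpenAI model documentation:

```python
# Illustrative sketch of per-model context limits; the values below are
# examples, not authoritative -- always check the current OpenAI docs.
TOKEN_MAX = {
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
    "gpt-4-turbo-preview": 128000,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16385,
}
```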