qbc
Hello! It may be related to the scale of your dataset partition. If the test/val dataset is too small, then the loss will be unstable. On the other hand, the...
Hello, sorry for the late response. The values in feature_importance indicate the relative importance of each feature to the model's predictions. By analyzing feature importance, clients can identify which features...
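As a hedged illustration of how a client might inspect such values (the dict shape and the example feature names below are assumptions, not taken from the actual model):

```python
# Sketch only: assumes feature_importance maps feature name to a
# relative importance score, as the discussion above suggests.
feature_importance = {"age": 0.42, "income": 0.31, "zip_code": 0.05}

# Rank features so a client can see which ones drive predictions most.
ranked = sorted(feature_importance.items(), key=lambda kv: kv[1], reverse=True)
top_feature, top_score = ranked[0]
print(top_feature)  # age
```

Sorting by score is enough for a quick look; for a proper report you would normalize the scores first.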
Hello, your model configuration settings for ollama are correct; the following is all you need:

```
{
    "config_name": "my_ollama_chat_config",
    "model_type": "ollama_chat",
    "model_name": "llama3.2:latest",
    "options": {
        "temperature": 0.5,
        "seed": 123
    },
    ...
```
This happens because in the MarkdownJsonDictParser, the prompt setting instructs the LLM to generate responses in JSON format as follows: "Respond a JSON dictionary in a markdown's fenced code block...
The main cause is that the LLM's reply does not match the expected format; see the [source code](https://github.com/modelscope/agentscope/blob/90a9bda60c3a233e6bd7b5a96fc73ca8b0d67705/src/agentscope/parsers/json_object_parser.py#L132C7-L132C29) of `MarkdownJsonDictParser`. The prompt in the source is "Respond a JSON dictionary in a markdown's fenced code block as follows: \n\`\`\`json\n{content_hint}\n\`\`\`", but the LLM replies with "\`\`\` xxxx \`\`\`", so extracting the content fails. For the details, check the source code and its `tag_begin` and related parameters. To fix it, you can modify the prompt of `MarkdownJsonDictParser` together with the corresponding `tag_begin` parameter, or switch to another parser such as `MultiTaggedContentParser`. A detailed tutorial is also available at https://doc.agentscope.io/zh_CN/build_tutorial/structured_output.html#id3
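A minimal sketch of why the extraction fails. The `extract_fenced_json` function below is illustrative, not the actual AgentScope code; only the `tag_begin`/`tag_end` parameter names mirror the parser's:

```python
def extract_fenced_json(reply: str, tag_begin: str = "```json",
                        tag_end: str = "```") -> str:
    """Illustrative extractor: requires the block to open with tag_begin."""
    start = reply.find(tag_begin)
    if start == -1:
        raise ValueError(f"missing opening tag {tag_begin!r}")
    start += len(tag_begin)
    end = reply.find(tag_end, start)
    if end == -1:
        raise ValueError(f"missing closing tag {tag_end!r}")
    return reply[start:end].strip()

# A reply that opens with a bare ``` (no "json" info string) never
# contains the expected tag_begin, so extraction raises:
good = '```json\n{"key": "value"}\n```'
bad = '```\n{"key": "value"}\n```'
print(extract_fenced_json(good))  # {"key": "value"}
```

This is why relaxing the expected `tag_begin`, or choosing a parser with looser tags, resolves the error.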
This issue arises from the LLM's failure to correctly output the JSON format. The correct format should appear as follows: ```json { "thought": "I need to confirm if Player2 is...
You can use `pre_print_hook` to call TTS. As for accumulation, you can keep the already-printed prefix and truncate it off to obtain the incremental content. Please refer to the implementation of the [print](https://github.com/agentscope-ai/agentscope/blob/5c3a7705c3a922e8a41c88f91666c870737f9075/src/agentscope/agent/_agent_base.py#L198) function...
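The prefix-truncation idea can be sketched as follows. The class and method names are illustrative (the real hook signature is in the linked source); only the accumulation logic is the point:

```python
class IncrementalSpeaker:
    """Illustrative accumulator: each call receives the full text
    streamed so far and yields only the newly added suffix for TTS."""

    def __init__(self) -> None:
        self._spoken = ""  # prefix already handed to TTS

    def on_print(self, full_text: str) -> str:
        # Truncate by the already-spoken prefix to get the increment.
        increment = full_text[len(self._spoken):]
        self._spoken = full_text
        return increment  # hand this chunk to your TTS engine

speaker = IncrementalSpeaker()
print(speaker.on_print("Hello"))         # Hello
print(speaker.on_print("Hello, world"))  # , world
```

Inside a real `pre_print_hook` you would keep one such accumulator per stream and feed each returned chunk to the TTS call.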
The main problem occurs when the LLM streams a tool call back: you may receive a string such as `'{"command'`, but after that string passes through `repair_json` inside `_json_loads_with_repair`, the result is a list `["command"]`, which causes the error. One fix is to add a type check [here](https://github.com/agentscope-ai/agentscope/blob/873cfe20252b4cba4fa63a5fc02f5fd5af97d46d/src/agentscope/_utils/_common.py#L43) in `_json_loads_with_repair`: every current call site in agentscope expects a dict, so check the result of `json.loads(repaired)` and return an empty dict `{}` if it is not a dict.
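The proposed guard can be sketched like this. The function name is hypothetical, and the repair step is simulated with plain `json.loads` (the real code uses `repair_json` from the json-repair package); the point is only the `isinstance` check on the result:

```python
import json

def json_loads_as_dict(text: str) -> dict:
    """Sketch of the fix: callers expect a dict, so coerce any
    non-dict result of the repair/parse step to an empty dict."""
    try:
        repaired = json.loads(text)  # stand-in for repair_json + loads
    except json.JSONDecodeError:
        # Unrepairable fragment, e.g. a truncated chunk like '{"command'.
        return {}
    return repaired if isinstance(repaired, dict) else {}

print(json_loads_as_dict('{"command'))          # {}
print(json_loads_as_dict('["command"]'))        # {}
print(json_loads_as_dict('{"command": "ls"}'))  # {'command': 'ls'}
```

With this guard, a streamed partial chunk that repairs into a list no longer crashes the dict-expecting call sites; it simply yields `{}` until the stream completes.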
> [@qbc2016](https://github.com/qbc2016) Will this be fixed on the framework side? I am still running into this issue.

Yes, we plan to make it always return a dict; see PR #1002.