
The planner produces a plan normally, but _execute_agent_step fails with a 400 error

Open · Ritian-Li opened this issue 7 months ago · 5 comments

Thanks for open-sourcing this project. The error output is as follows:

================================== Ai Message ==================================
Name: planner

{
    "locale": "zh-CN",
    "has_enough_context": false,
    "thought": "用户想要了解大模型时代的技术发展情况,需从多个方面进行研究,以呈现全面且深入的技术发展态势。",
    "title": "大模型时代技术发展的研究计划",
    "steps": [
        {
            "need_web_search": true,
            "title": "大模型时代技术发展的历史与现状研究",
            "description": "收集大模型从起步到当前阶段的关键时间节点及标志性技术突破;各阶段代表性大模型的参数规模、训练数据量、应用领域等数据;当前主流大模型在自然语言处理、计算机视觉等主要领域的性能指标与应用成果。",
            "step_type": "research"
        },
        {
            "need_web_search": true,
            "title": "大模型技术发展的未来预测与利益相关者研究",
            "description": "收集各大研究机构与专家对大模型未来3 - 5年在技术能力、应用拓展等方面的预测报告;分析大模型技术发展涉及的主要利益相关者,如科技企业、科研机构、政府部门等的态度、投入与期望。",
            "step_type": "research"
        },
        {
            "need_web_search": true,
            "title": "大模型技术发展的风险与挑战研究",
            "description": "梳理大模型在数据隐私、伦理道德、计算资源消耗等方面面临的问题与挑战;收集已有的应对措施、政策法规及行业规范等相关资料。",
            "step_type": "research"
        }
    ]
}
2025-05-15 10:09:34,822 - src.graph.nodes - INFO - Research team is collaborating on tasks.
2025-05-15 10:09:34,823 - src.graph.nodes - INFO - Researcher node is researching.
2025-05-15 10:09:42,765 - src.graph.nodes - INFO - Executing step: 大模型时代技术发展的历史与现状研究
2025-05-15 10:09:42,985 - httpx - INFO - HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions "HTTP/1.1 400 Bad Request"
Traceback (most recent call last):
  File "/Users/ritian/work/deer-flow/main.py", line 146, in <module>
    ask(
  File "/Users/ritian/work/deer-flow/main.py", line 33, in ask
    asyncio.run(
  File "/Users/ritian/.local/share/uv/python/cpython-3.12.10-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Users/ritian/.local/share/uv/python/cpython-3.12.10-macos-aarch64-none/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/.local/share/uv/python/cpython-3.12.10-macos-aarch64-none/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/src/workflow.py", line 78, in run_agent_workflow_async
    async for s in graph.astream(
  File "/Users/ritian/work/deer-flow/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 2305, in astream
    async for _ in runner.atick(
  File "/Users/ritian/work/deer-flow/.venv/lib/python3.12/site-packages/langgraph/pregel/runner.py", line 444, in atick
    await arun_with_retry(
  File "/Users/ritian/work/deer-flow/.venv/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 128, in arun_with_retry
    return await task.proc.ainvoke(task.input, config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/.venv/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 583, in ainvoke
    input = await step.ainvoke(input, config, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/.venv/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 371, in ainvoke
    ret = await asyncio.create_task(coro, context=context)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/src/graph/nodes.py", line 427, in researcher_node
    return await _setup_and_execute_agent_step(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/src/graph/nodes.py", line 416, in _setup_and_execute_agent_step
    return await _execute_agent_step(state, agent, agent_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ritian/work/deer-flow/src/graph/nodes.py", line 338, in _execute_agent_step
    result = await agent.ainvoke(input=agent_input)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
...

openai.BadRequestError: Error code: 400 - {'error': {'code': 'InvalidParameter', 'message': "Invalid function format: 'type' Request id: 021747274983045b74caa216dfc0d2d7ac74b9263441db7c0e3e3", 'param': '', 'type': 'BadRequest'}}
During task with name 'agent' and id '099f2da0-c98f-e4b7-3312-1d910690fb67'
During task with name 'researcher' and id '6730d661-f5d6-2e0d-3764-2d6431ba0fad'
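
The 400 appears only once the researcher node starts executing its step, i.e. once the agent attaches tools to the request, so the Ark endpoint seems to be rejecting the OpenAI-style `tools` payload rather than the prompt itself. A minimal standalone check along these lines (the env var name, tool schema, and query are placeholders for illustration, not deer-flow's actual tool) can confirm whether the configured model accepts function definitions at all:

# Hypothetical standalone check: does the Ark endpoint accept an OpenAI-style
# `tools` payload for this model? Env var, tool schema, and query are
# placeholders for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    api_key=os.environ["ARK_API_KEY"],  # placeholder env var name
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical tool, not deer-flow's
            "description": "Search the web for a query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

resp = client.chat.completions.create(
    model="doubao-1-5-pro-32k-250115",  # substitute the model from conf.yaml
    messages=[{"role": "user", "content": "What is LangGraph?"}],
    tools=tools,
)
print(resp.choices[0].message)

If this call reproduces the same "Invalid function format" response, the endpoint is rejecting the tool schema itself and the fix lies with the model choice or request format; if it succeeds, the problem is in how the agent request is built.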

Ritian-Li · May 15 '25

+1

Sakura4036 · May 16 '25

Could you share which model you are using?

foreleven · May 16 '25

Could you share which model you are using?

AGENT_LLM_MAP: dict[str, LLMType] = {
    "coordinator": "basic",
    "planner": "reasoning",
    "researcher": "reasoning",
    "coder": "basic",
    "reporter": "reasoning",
    "podcast_script_writer": "basic",
    "ppt_composer": "basic",
    "prose_writer": "basic",
}
BASIC_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "deepseek-v3-250324"
  # model: "doubao-1-5-pro-32k-250115"
  max_tokens: 8192


REASONING_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "doubao-1-5-thinking-vision-pro-250428"
  max_tokens: 8192

VISION_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "doubao-1-5-thinking-vision-pro-250428"
  max_tokens: 8192
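
Since the researcher is mapped to the reasoning model here, one way to narrow it down is to exercise the same LangChain path the agent relies on, `bind_tools`, against that model directly. A sketch assuming the base_url and model from the config above (the dummy tool and the API-key environment variable are placeholders, not deer-flow code):

# Sketch of the LangChain code path the researcher agent depends on: bind a
# trivial tool to the configured reasoning model and invoke it once.
import os
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def dummy_search(query: str) -> str:
    """Return a canned result for a query (illustration only)."""
    return f"results for {query}"

llm = ChatOpenAI(
    model="doubao-1-5-thinking-vision-pro-250428",
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    api_key=os.environ["ARK_API_KEY"],  # placeholder env var name
)

# If this raises the same 400 "Invalid function format" error, the reasoning
# model itself rejects the tools schema; if it succeeds, look elsewhere in
# the agent setup.
print(llm.bind_tools([dummy_search]).invoke("please call dummy_search"))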

Sakura4036 · May 16 '25

Could you share which model you are using?

AGENT_LLM_MAP: dict[str, LLMType] = {
    "coordinator": "basic",
    "planner": "basic",
    "researcher": "basic",
    "coder": "basic",
    "reporter": "basic",
    "podcast_script_writer": "basic",
    "ppt_composer": "basic",
    "prose_writer": "basic",
}
BASIC_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "doubao-1-5-pro-32k-250115"

Ritian-Li · May 16 '25

+1

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 131072 tokens. However, you requested 170799 tokens (170799 in the messages, 0 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
During task with name 'agent' and id '544dd79f-df3b-291e-b26d-b11f850ccdc7'
During task with name 'researcher' and id '0c0c2341-b9bd-9784-a0cf-1ddcd9fe5a02'
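
Note that this 400 is a different failure from the one above: here the researcher's accumulated messages, mostly long tool observations, exceed the model's 131072-token context window. One mitigation is to cap the size of each tool observation before it is returned to the agent; the sketch below is illustrative only (the tool name, search stub, and character budget are assumptions, not deer-flow's actual implementation):

# Sketch: keep each tool observation under a rough character budget so the
# accumulated history stays inside the model's 131072-token context window.
from langchain_core.tools import tool

MAX_OBSERVATION_CHARS = 20_000  # crude stand-in for a token budget; tune it

def _truncate(text: str, limit: int = MAX_OBSERVATION_CHARS) -> str:
    """Cut overly long tool output and mark the cut explicitly."""
    return text if len(text) <= limit else text[:limit] + "\n...[truncated]"

def fetch_search_results(query: str) -> str:
    # Placeholder so the sketch is self-contained; a real implementation
    # would call the actual search backend (Tavily, DuckDuckGo, ...).
    return f"(no backend wired up; query was: {query})"

@tool
def web_search(query: str) -> str:
    """Search the web and return a truncated result string."""
    return _truncate(fetch_search_results(query))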

sunming66 · Aug 28 '25