[Query Engine Streamlit App] Error during research: Error code: 400 - Content Exists Risk

Open arsuili opened this issue 2 months ago • 4 comments

Error message

Error during research: Error code: 400 - {'error': {'message': 'Content Exists Risk', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}

Error details

Traceback (most recent call last):
  File "/www/*/BettaFish-main/SingleEngineApp/query_engine_streamlit_app.py", line 154, in execute_research
    agent._reflection_loop(i)
    ~~~~~~~~~~~~~~~~~~~~~~^^^
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/agent.py", line 392, in _reflection_loop
    self.state = self.reflection_summary_node.mutate_state(
                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        reflection_summary_input, self.state, paragraph_index
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/nodes/summary_node.py", line 368, in mutate_state
    raise e
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/nodes/summary_node.py", line 353, in mutate_state
    updated_summary = self.run(input_data, **kwargs)
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/nodes/summary_node.py", line 284, in run
    raise e
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/nodes/summary_node.py", line 271, in run
    response = self.llm_client.stream_invoke_to_string(
        SYSTEM_PROMPT_REFLECTION_SUMMARY,
        message,
    )
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/llms/base.py", line 148, in stream_invoke_to_string
    for chunk in self.stream_invoke(system_prompt, user_prompt, **kwargs):
                 ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/llms/base.py", line 132, in stream_invoke
    raise e
  File "/www/*/BettaFish-main/SingleEngineApp/../QueryEngine/llms/base.py", line 118, in stream_invoke
    stream = self.client.chat.completions.create(
        model=self.model_name,
    ...<2 lines>...
        **extra_params,
    )
  File "/root/miniconda3/envs/*/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/*/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/root/miniconda3/envs/*/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/*/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Content Exists Risk', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}

Environment

  • Application: Query Engine Streamlit App
  • Time: 2025-11-09 20:33:54

arsuili avatar Nov 09 '25 12:11 arsuili

I waited half the day only for the run to die on this error... Do I really have to rerun the whole thing?

arsuili avatar Nov 09 '25 12:11 arsuili

Hi @arsuili! I'm Dosu and I'm helping the BettaFish team.

This "Content Exists Risk" error means the LLM provider's content moderation flagged the request as high-risk or containing sensitive content and rejected it outright, which interrupts the pipeline. The Query Engine Streamlit App currently has no automatic checkpoint/resume or progress-saving feature, so after this error the only option is to rerun from scratch; previous progress is not restored automatically.
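As a minimal illustration (an editor's sketch, not BettaFish code), the caller in query_engine_streamlit_app.py could recognize this specific rejection instead of letting it abort the whole run. The helper names here are hypothetical; only the "Content Exists Risk" message and the openai.BadRequestError type come from the traceback above:

```python
import openai

def is_moderation_rejection(err: openai.BadRequestError) -> bool:
    """The provider signals content moderation via this message in the 400 body."""
    return "Content Exists Risk" in str(err)

def run_paragraph_safely(agent, i: int) -> bool:
    """Run one reflection loop; return False if moderation rejected it."""
    try:
        agent._reflection_loop(i)  # the call that raised in the traceback above
        return True
    except openai.BadRequestError as e:
        if not is_moderation_rejection(e):
            raise
        # Moderation rejections are deterministic, so record it and move on.
        print(f"Paragraph {i}: rejected by provider content moderation, skipped.")
        return False
```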

If you hit this often, consider saving intermediate state manually (for example via the SAVE_INTERMEDIATE_STATES setting), or back up the state file before each major paragraph is processed, then restore it with load_state after a failure and continue from the failed paragraph; this requires some code changes (see the sketch below). At present the main loop aborts immediately on an API 400 error, with no automatic skip or retry mechanism.
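A minimal checkpointing sketch, assuming agent.state is picklable and that you restore it through whatever load_state-style hook the engine exposes; the file path and helper names below are illustrative, not part of BettaFish:

```python
import pickle
from pathlib import Path

CHECKPOINT = Path("query_engine_state.pkl")  # illustrative location

def save_checkpoint(state, paragraph_index: int) -> None:
    # Persist both the state and the position, so a restart can resume.
    CHECKPOINT.write_bytes(pickle.dumps({"state": state, "index": paragraph_index}))

def load_checkpoint():
    if not CHECKPOINT.exists():
        return None
    return pickle.loads(CHECKPOINT.read_bytes())
```

Usage sketch: call save_checkpoint(agent.state, i) before processing each paragraph; after a crash, restart the app, restore agent.state from the saved dict, and start the loop at the saved index instead of 0.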

To reduce the chance of triggering moderation, try revising or rephrasing prompt content to avoid sensitive terms and repetitive text. Some error-handling and result-validation improvements were merged recently (PR #225), but checkpoint/resume has not been implemented yet.

In short: for now this error means rerunning the job; keep an eye on project updates, or implement your own checkpoint/auto-save mechanism.

To reply, just mention @dosu.


dosubot[bot] avatar Nov 09 '25 12:11 dosubot[bot]

There is now a retry mechanism for failed requests (https://github.com/666ghj/BettaFish/pull/257), but moderation failures just trigger endless retries.

So switching to an LLM provider without content moderation is the preferred fix. State persistence is indeed a direction worth improving.
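For reference, a retry policy along these lines would back off on transient failures but fail fast on the moderation rejection. This is an editor's sketch under those assumptions, not the PR #257 implementation:

```python
import time
import openai

def call_with_retry(fn, max_attempts: int = 3, backoff_s: float = 2.0):
    """Retry transient API failures; treat moderation rejections as fatal."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except openai.BadRequestError as e:
            if "Content Exists Risk" in str(e):
                raise  # deterministic moderation rejection: retrying cannot succeed
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)
        except (openai.APIConnectionError, openai.RateLimitError):
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)  # transient: back off and try again
```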

DoiiarX avatar Nov 09 '25 12:11 DoiiarX