
[Bug] Reply streaming gets interrupted

Open · Reekin opened this issue 11 months ago · 10 comments

Bug Description

Starting today, with both Claude 3 and GPT-4, generation stops on its own halfway through. Even rebuilding an older version (2.11.2) did not help. This never happened before.

Steps to Reproduce

Send a question that prompts the AI to produce a somewhat longer reply.

Expected Behavior

Always reproducible: streaming is aborted prematurely.

Screenshots

(screenshot attached in the original issue)

Deployment Method

  • [ ] Docker
  • [ ] Vercel
  • [ ] Server

Desktop OS

Windows 10

Desktop Browser

Chrome

Desktop Browser Version

122.0.6261.129

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response

Reekin avatar Mar 20 '24 11:03 Reekin


same problem

AiharaMahiru avatar Mar 20 '24 17:03 AiharaMahiru

Are you using the deployment on Vercel or another platform? Could you provide the console logs from both the browser and the server side to help troubleshoot the issue?

fred-bf avatar Mar 21 '24 10:03 fred-bf

Windows 10, NextChat 2.11.3. Hit the same problem: slightly longer ChatGPT-4 answers (around 200 characters) get cut off before they finish.

hiforrest avatar Mar 25 '24 09:03 hiforrest

Same as yours. I'm trying the ChatGLM3 API, and when the service is a little busy, chats with Next-Web get interrupted...

CNYoki avatar Mar 30 '24 08:03 CNYoki

@Reekin, would you mind trying to configure the max_tokens setting on the settings page? I'm not sure whether the response message is being cut off because of the default max_tokens value.

fred-bf avatar Apr 09 '24 08:04 fred-bf

@Reekin, would you mind trying to configure the max_tokens setting on the settings page? I'm not sure whether the response message is being cut off because of the default max_tokens value.

My max_tokens was 4000. The issue did not recur afterwards.

Reekin avatar Apr 21 '24 17:04 Reekin
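For context on the suggestion above: max_tokens caps the length of the model's reply, so a value lower than the answer needs will end the stream early (the API reports finish_reason "length" instead of "stop"). A minimal sketch of where the cap sits in a Chat Completions request body — the field names follow the OpenAI API, but the helper function itself is hypothetical, not NextChat code:

```typescript
// Hypothetical helper: builds an OpenAI Chat Completions request body.
// If the reply would exceed `maxTokens`, the server truncates it and the
// final streamed chunk carries finish_reason "length" instead of "stop".
function buildChatPayload(userMessage: string, maxTokens: number) {
  return {
    model: "gpt-4",
    stream: true,
    max_tokens: maxTokens, // raise this if answers are cut off mid-sentence
    messages: [{ role: "user" as const, content: userMessage }],
  };
}

const payload = buildChatPayload("Explain streaming responses.", 4000);
console.log(payload.max_tokens); // 4000
```

Checking finish_reason in the streamed chunks is a quick way to tell a token-limit truncation apart from the animation bug discussed later in this thread.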

I found that the code for the streaming-output animation may cause the output to be interrupted. Specifically, app/client/platforms/openai.ts (other platforms are similar) has an animateResponseText() function that calls itself recursively via requestAnimationFrame (requestAnimationFrame(animateResponseText)). When the text is fairly long, an uncaught "Maximum update depth exceeded" error is occasionally thrown, after which none of the remaining logic for that request runs. This error appears to be a defensive mechanism of the framework.

Simply wrapping the offending call in a catch (as I recall, the most targeted try{}catch{} wraps the get().updateCurrentSession call inside the onUpdate invoked by animateResponseText) seems to avoid the problem without affecting the output. In the end, though, I removed the animateResponseText() function entirely in my fork and request a single animation frame only when a new message arrives (onmessage), which avoids the nesting completely. ref: https://github.com/shansing/ChatGPT-Next-Web/commit/d81fdbf1df5485192141ca1ff6efc3f02f037a9b#diff-6de03d672a0a7c506b48a06a6bec2b3763d7e29f9b9c4fbd490fa2e177fbb01fR264

shansing avatar Jun 09 '24 14:06 shansing
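The one-frame-per-message approach described above can be sketched as follows. This is not NextChat's actual code — it is a simplified model (with requestAnimationFrame stubbed so it runs outside a browser) showing the key idea: each incoming chunk schedules at most one animation frame, instead of the frame callback re-scheduling itself, so no unbounded nesting of state updates can occur:

```typescript
// Stub for a non-browser environment; in the browser the real
// requestAnimationFrame would be used (assumption for this sketch).
const requestAnimationFrame = (cb: () => void): void => { cb(); };

let responseText = "";   // text already rendered to the user
let remainText = "";     // streamed text not yet rendered
let framePending = false;

// Pattern from the fix: schedule at most ONE frame per incoming chunk.
// The callback never re-schedules itself, so there is no recursion and
// no "Maximum update depth exceeded" cascade.
function onmessage(chunk: string): void {
  remainText += chunk;
  if (framePending) return;  // a frame is already queued for this batch
  framePending = true;
  requestAnimationFrame(() => {
    framePending = false;
    // flush everything pending into the visible text in one step
    responseText += remainText;
    remainText = "";
  });
}

onmessage("Hello, ");
onmessage("world!");
console.log(responseText); // "Hello, world!"
```

The trade-off is that the typewriter effect becomes tied to the arrival rate of chunks rather than a smooth per-frame drip, but it removes the recursive scheduling that triggered the error.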

Hello! Has this problem been solved? It is still reproducible for me, and answers get cut off :(

Xeelix avatar Jul 23 '24 22:07 Xeelix