🐛 fix: Avoid forced smoothing when client fetch is enabled
💻 Change Type
- [ ] ✨ feat
- [x] 🐛 fix
- [ ] ♻️ refactor
- [ ] 💄 style
- [ ] 👷 build
- [x] ⚡️ perf
- [ ] 📝 docs
- [ ] 🔨 chore
🔀 Description of Change
Although this takes #3800 into account, it shouldn't be applied across the board. In my view, smoothing is meant for the case of slow, stuttering output, while quite a few Chinese models are actually very fluent once client-side requests are enabled; forcing smoothing on here actually hurts both the viewing experience and the output speed (especially with providers like SiliconCloud).
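The change described above could be sketched as follows. This is a minimal illustration under assumptions: `ChatStreamOptions`, `useClientFetch`, and `shouldSmooth` are hypothetical names, not necessarily LobeChat's actual identifiers.

```typescript
// Hypothetical sketch of the fix: decide whether to smooth streamed output.
// Previously, enabling client-side fetch forced smoothing on; the fix is to
// respect only an explicit per-provider/model preference instead.

interface ChatStreamOptions {
  /** explicit smoothing preference; undefined means "no preference set" */
  smoothing?: boolean;
  /** whether requests are sent directly from the browser */
  useClientFetch: boolean;
}

// Before: smoothing was effectively `options.smoothing ?? options.useClientFetch`,
// so client fetch alone switched it on.
// After: only an explicit setting enables smoothing.
export const shouldSmooth = (options: ChatStreamOptions): boolean =>
  options.smoothing ?? false;
```

With this shape, a model that streams fluently over client fetch is no longer throttled by smoothing, while a user can still opt in per model.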
📝 Additional Information
👍 @sxjeru
Thank you for raising your pull request and contributing to our community. Please make sure you have followed our contributing guidelines; we will review it as soon as possible.
If you encounter any problems, please feel free to connect with us.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 92.07%. Comparing base (c9f00e5) to head (207b807). Report is 32 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #3904 +/- ##
========================================
Coverage 92.07% 92.07%
========================================
Files 460 460
Lines 31543 31545 +2
Branches 3148 2999 -149
========================================
+ Hits 29044 29046 +2
Misses 2499 2499
| Flag | Coverage Δ | |
|---|---|---|
| app | 92.07% <100.00%> (+<0.01%) | :arrow_up: |
| server | 97.36% <ø> (ø) | |
Flags with carried forward coverage won't be shown.
Removing it outright is definitely not an option.
That issue only mentions that Azure OpenAI has this problem, right? I feel it would be better to enable smoothing manually in the model configuration.
> That issue only mentions that Azure OpenAI has this problem, right? I feel it would be better to enable smoothing manually in the model configuration.

Indeed, only Azure's OpenAI replies arrive paragraph by paragraph; the official OpenAI API streams word by word.
@sxjeru If only Azure has this problem, we could indeed enable it just for Azure.
No, wait. I read that issue carefully, and that endpoint streams back word by word. If it still stutters, then the problem is not limited to Azure.
Actually, I suspect the problem in #3800 was not fixed by #3820. I hope the reporter can give us feedback.
What the current smoothing handles is the stutter case where large blocks of text are emitted intermittently.
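The "intermittent large chunks" case that smoothing addresses can be sketched as a small buffer that re-emits big incoming chunks in steady, fixed-size slices. This is a simplified illustration under assumptions, not LobeChat's actual implementation; `smoothChunks` and `sliceSize` are hypothetical names (the real code would also pace emissions over time, which is omitted here).

```typescript
// Simplified illustration of output smoothing: a provider that emits text in
// large intermittent chunks gets re-emitted as small fixed-size slices, so the
// UI can render a steady stream instead of sudden blocks.
// (Hypothetical helper, not LobeChat's actual implementation.)
export function* smoothChunks(
  chunks: Iterable<string>,
  sliceSize = 2,
): Generator<string> {
  let buffer = '';
  for (const chunk of chunks) {
    buffer += chunk;
    // Drain the buffer in fixed-size slices as long as enough text is queued.
    while (buffer.length >= sliceSize) {
      yield buffer.slice(0, sliceSize);
      buffer = buffer.slice(sliceSize);
    }
  }
  if (buffer) yield buffer; // flush whatever remains at end of stream
}
```

For a provider that already streams word by word, this buffering only adds latency, which is the motivation for not forcing it on.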