NextChat
Azure gpt-4-turbo-2024-04-09 is not responding; the response is truncated or interrupted. [Bug]
Bug Description
In the example, when asking the model to describe the image, the model does not respond completely; only a fraction of the response is displayed. I adjusted the maximum token generation parameter to check whether that was the issue, but it still doesn't work. I also modified the parameter in the model deployment on Azure, but I am still getting the same result. This leads me to think it might be an issue with the UI rather than the model configuration.
Steps to Reproduce
Use the model normally.
Expected Behavior
A complete answer.
Screenshots
Deployment Method
- [ ] Docker
- [ ] Vercel
- [ ] Server
Desktop OS
Windows 10, desktop app v1.12.2
Desktop Browser
No response
Desktop Browser Version
No response
Smartphone Device
No response
Smartphone OS
No response
Smartphone Browser
No response
Smartphone Browser Version
No response
Additional Logs
No response
Same problem here; however, if I change the model to gpt-4-vision-preview, then gpt-4-turbo-2024-04-09 can respond normally.
Can you provide a screenshot of the browser console after redacting any sensitive information?
Sure @Dean-YZG
I just solved the same problem; I hope this helps you.
Change the code of `isVisionModel` so that your model name is included.
Change the `visionModel && modelConfig.model.includes("preview")` condition so that your model name is included (see the sketch below).
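For reference, here is a minimal sketch of that kind of change, assuming a keyword-based check similar to NextChat's `isVisionModel` utility; the exact keyword list and call sites may differ between versions:

```ts
// Hypothetical keyword list: "gpt-4-turbo-2024-04-09" is added as an example
// of including your own deployed model name alongside the existing entries.
const VISION_MODEL_KEYWORDS = ["vision", "gpt-4-turbo-2024-04-09"];

// True when the model name matches any vision keyword.
export function isVisionModel(model: string): boolean {
  return VISION_MODEL_KEYWORDS.some((keyword) => model.includes(keyword));
}
```

The request-building condition can then rely on `isVisionModel(modelConfig.model)` instead of additionally requiring `modelConfig.model.includes("preview")`, so a model whose name lacks "preview" (such as gpt-4-turbo-2024-04-09) still gets the vision-specific request options.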