chainlit
UI extremely slow for large prompts
Describe the bug
When providing a large prompt (in my case, about 40k tokens), the UI is very slow to stream back the response. The Python backend has long since finished the on_message hook, but the UI struggles to render the response. In my case it renders at less than one token per second, which makes it unusable.
To Reproduce
- Paste a large prompt into the input box, for example by copy-pasting the contents of a large file.
- Press enter and observe that the render speed of the UI is far slower than the streaming speed of the LLM.
Expected behavior
I would expect the UI to be able to keep up with the streaming speed of the LLM.
Desktop
- OS: Windows 11
- Browser: Microsoft Edge
- Chainlit Version: 2.8.3