
No Response on Initial Message

amansoniamazatic opened this issue 9 months ago · 14 comments

Hi @HenryHengZJ,

Sometimes in the Flowise embedded chat, when a user types a message for the first time, the chatbot shows a loading indicator but never produces a response (see the attached screenshot). When the user sends the message again, the response comes through.

My guess is that the WebSocket takes time to connect initially, and streaming cannot start during that window. For such cases, if the WebSocket is not yet connected, a direct (non-streaming) response should be returned instead. Please look into this and fix it.
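The fallback described above could be sketched roughly as follows. The helper names and the polling approach are illustrative assumptions, not the actual FlowiseChatEmbed API:

```typescript
// Hedged sketch: decide between streaming and a direct HTTP response
// based on whether the socket has connected within a short timeout.
// `chooseSendMode` and `waitForConnection` are hypothetical helpers.

type SendMode = 'stream' | 'direct';

// Stream only when the socket reports a live connection.
export function chooseSendMode(socketConnected: boolean): SendMode {
  return socketConnected ? 'stream' : 'direct';
}

// Poll an `isConnected` probe until it turns true or `timeoutMs` elapses.
export function waitForConnection(
  isConnected: () => boolean,
  timeoutMs: number,
  pollMs: number = 50
): Promise<boolean> {
  return new Promise((resolve) => {
    const start = Date.now();
    const poll = (): void => {
      if (isConnected()) {
        resolve(true);
      } else if (Date.now() - start >= timeoutMs) {
        resolve(false);
      } else {
        setTimeout(poll, pollMs);
      }
    };
    poll();
  });
}
```

On send, the UI would briefly await the connection and, when the mode resolves to 'direct', fall back to a plain prediction request instead of waiting on a stream that never starts.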

amansoniamazatic · Apr 30 '24

For me, I get the data in the response, but the UI isn't updated.


If I close and then reopen the chat window, the message appears.

birksy89 · Apr 30 '24

@birksy89 Same here: I also get the data in the response, but the embedded chat UI is not updated. @HenryHengZJ, please check this bug.

amansoniamazatic · May 02 '24

@amansoniamazatic - Perhaps you could update the title of this issue to be something a bit more descriptive?

I'd also like to note that this issue only appears to be the case on my local development copy.

After deploying to Railway via Docker, the "production" copy doesn't seem to encounter the same issue.

I have altered the React/Embed libraries in the past, but have since synced them to the latest version.

It's possible that they aren't installed correctly or are cached on my development copy, while working OK in production.

Hopefully, if the title is altered, more people with the same issue will contribute their findings 😃

birksy89 · May 02 '24

Hi @birksy89, I deployed Flowise on AWS EC2, which I consider the best cloud option, and the issue still occurs there. I also built a chatbot in React Native using the Flowise API and hit the same problem: the WebSocket takes time to connect initially, and streaming cannot start during that window. To fix it there, I added a condition that checks whether the WebSocket connection has been established; if not, I return a direct response, otherwise a streaming one. That application now works fine, but the embedded chat UI for the web still has the issue.

I should also mention that I see no problems when I use Flowise directly and chat in the given flow; the issue only appears when I embed the chatbot on a website. So the main issue is in the embedded chat UI. I have also changed the title of this issue accordingly.

amansoniamazatic · May 02 '24

Hey, as the title says, is it only the first initial response? Yeah, it's largely due to the socket not having established a connection successfully yet.

HenryHengZJ · May 04 '24

Hey, as the title says, is it only the first initial response? Yeah, it's largely due to the socket not having established a connection successfully yet.

Hey @HenryHengZJ , to address the initial WebSocket connection delay, perhaps we could explore optimizing the connection process for faster establishment. Additionally, implementing a fallback mechanism to provide a direct output if the WebSocket isn't connected at that moment might ensure a smoother user experience during the loading period.

amansoniamazatic · May 07 '24

@amansoniamazatic Thanks for raising this. I've experienced this issue as well but not only on the initial message; sometimes just intermittently as the chat continues.

thigarette · May 08 '24

@HenryHengZJ, many people are facing this issue. Could you please fix it as soon as possible?

amansoniamazatic · May 08 '24

I think the issue was the socket.io connection. I was not able to reproduce it on my side; it might be due to internet connection or CPU/RAM, I'm not sure. We are working on using SSE instead of socket.io, which may solve this issue.
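For context on why SSE can sidestep the handshake problem: server-sent events arrive as plain "data:" lines over an ordinary HTTP response, so there is no separate connection upgrade to wait for before streaming begins. A minimal parser for such a stream might look like this (illustrative only, not Flowise's actual implementation):

```typescript
// Minimal SSE chunk parser: extract the payload of each "data:" line.
// Illustrative sketch; real SSE parsing also handles "event:"/"id:"
// fields and messages split across network chunks.
export function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim());
}
```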

HenryHengZJ · May 08 '24

@HenryHengZJ , I understand that, but in the meantime, implementing a fallback mechanism to provide a direct output if the WebSocket isn't connected at that moment might ensure a smoother user experience.

amansoniamazatic · May 09 '24

Are you using self-hosted LLM models or something like OpenAI? In my case, I'm facing a similar issue. I believe it relates to the fact that I'm running a locally self-hosted Ollama model, which may not be as fast and responsive as OpenAI.

Flowise needs to handle these situations appropriately. Ensuring people can run their models locally, even when they are slower and less responsive than commercially available APIs and models, benefits the development process, independence, and cost.

Some RAG flows/agents may respond slowly, and that's fine; not everything has to be in a chat-conversation form. I would be okay waiting one minute, or in some cases even ten, for a response that is generated and returned to me asynchronously, for example by email. I believe the Flowise platform should let people build flows that return asynchronous responses.

qdrddr · May 10 '24

@HenryHengZJ @qdrddr I've experimented with both Ollama and OpenAI models. While they perform well on my local machine with sufficient GPU and RAM, I encountered issues when deploying Flowise on an AWS EC2 free-tier instance. The lack of resources (only 1 GB of RAM and 1 vCPU) seemed to impact the responsiveness of the WebSocket connection, causing delays in UI responses even though responses always appeared in the logs.

To address this, I migrated the same Flowise application to a free Hugging Face Space, where I observed improved performance. Hugging Face offers 2 CPUs and 16 GB of RAM for free, which significantly alleviated the initial-message delay. However, Hugging Face discontinues a free Space after two days of inactivity, so I set up a cron job that triggers the service every 12 hours to keep it active.
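The keep-alive job mentioned above could look something like the sketch below; the Space URL is a placeholder, and any scheduler that issues a periodic GET (including a plain cron entry running curl) works just as well:

```typescript
// Hypothetical keep-alive for a free Hugging Face Space that sleeps
// after inactivity: issue a GET request on a fixed interval.
// SPACE_URL is a placeholder, not a real endpoint.
const SPACE_URL = 'https://example-space.hf.space';

// Convert an interval in hours to milliseconds for the timer.
export function hoursToMs(hours: number): number {
  return hours * 60 * 60 * 1000;
}

// Ping the Space and report the HTTP status; any successful response
// counts as activity and keeps the Space awake.
async function ping(url: string): Promise<number> {
  const res = await fetch(url, { method: 'GET' });
  return res.status;
}

// In a long-running Node process (commented out so the sketch has no
// side effects when loaded):
// setInterval(() => ping(SPACE_URL).catch(console.error), hoursToMs(12));
```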

amansoniamazatic · May 13 '24

@HenryHengZJ , After reviewing the Flowise embed code, I suggest adding an extra condition in:

https://github.com/FlowiseAI/FlowiseChatEmbed/blob/83576e276a6940c916bded323e4a8bd4a76f0475/src/components/Bot.tsx
https://github.com/FlowiseAI/FlowiseChatEmbed/blob/83576e276a6940c916bded323e4a8bd4a76f0475/src/queries/sendMessageQuery.ts

The condition should check whether the WebSocket is connected and, if not, produce the output directly. This adjustment could alleviate the issue to some extent and let users with lower CPU and RAM run it smoothly, since currently Flowise only works well with ample CPU and RAM.
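The suggested condition might look like the following sketch. The request shape (a socketIOClientId field that enables streaming server-side) reflects one reading of the embed at that commit and is an assumption, not a verified API contract:

```typescript
// Hedged sketch: only request streaming when the socket is actually
// connected; otherwise send a plain prediction request so the server
// returns the full answer in the HTTP response body.

interface PredictionBody {
  question: string;
  socketIOClientId?: string;
}

export function buildPredictionBody(
  question: string,
  socket: { connected: boolean; id?: string } | null
): PredictionBody {
  const body: PredictionBody = { question };
  // Attach the socket id (assumed to enable streaming server-side)
  // only when the connection is actually established.
  if (socket !== null && socket.connected && socket.id) {
    body.socketIOClientId = socket.id;
  }
  return body;
}
```

With this guard, a slow or failed socket handshake degrades to a full (non-streaming) reply instead of a spinner that never resolves.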

Note: I also checked your DigitalOcean video (https://youtu.be/nchOuARbqEk?si=yKvtNbsI9DrX0Hfb) and followed the same steps, and I faced the same WebSocket connection delay. I believe addressing this should be a priority.

amansoni7477030 · May 14 '24

I think the issue was the socket.io connection. I was not able to reproduce it on my side; it might be due to internet connection or CPU/RAM, I'm not sure. We are working on using SSE instead of socket.io, which may solve this issue.

I've deployed on GCP using this guide, and everything has worked perfectly except for the response-display issue, which happens randomly (sometimes on the initial message, other times somewhere in between).

Example 1 (Flowise Docs Chatbot) [screenshot]

Example 2 (Simple General Chatbot) [screenshot]

I think I can rule out inadequate CPU and RAM as a cause, because I allocated 2 CPUs and 4 GB of RAM to the pod. I've also tested with multiple LLM providers (OpenAI, Azure OpenAI, Vertex AI), and the behaviour is the same. The only time the issue doesn't occur at all is when testing locally with npx flowise start, which made me think the issue was my network. But when I exposed the local version to the internet via an ngrok tunnel, the issue still didn't occur, so it's probably deployment-related. I just have no idea what specifically. Hope SSE solves the issue!

thigarette · May 16 '24

@HenryHengZJ, any update?

amansoni7477030 · Jun 10 '24

Having the same issue. Deployed on a Contabo VPS. At first the responses worked, but now none of the messages is rendered on the screen.

carrati · Oct 10 '24