feat: ui build in one single http request
Problem
Currently, the backend that serves the frontend can't be scaled up to multiple workers. The main limitation is that the playground builds a component with multiple requests, expecting to always hit the same backend. The current mitigation is to put a distributed cache in front of the backend (Redis is the only one supported). While this is doable, it adds a lot of complexity from both a coding and an operational perspective.
Solution
This pull request introduces a new endpoint for building the flow in the playground in a single HTTP request, removing the need for state to be distributed across workers. The main requirements for this endpoint are:
- Provide "updates" to the client after each node has been built.
- Be stoppable if the client decides so (e.g. to break infinite flow loops).
To fulfill these requirements, the endpoint serves the response in a streaming fashion (NDJSON format). After each vertex is built, a JSON object containing the build details is sent to the client and reflected in the UI state. If the client interrupts the connection, all pending build tasks are stopped.
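As a rough illustration of the NDJSON framing (function and field names here are hypothetical, not Langflow's actual API), the server can be thought of as an async generator that emits one JSON line per built vertex and stops all work when the client disconnects:

```python
import asyncio
import json

# Hypothetical build step: stands in for the real per-vertex build work.
async def build_vertex(vertex_id: str) -> dict:
    await asyncio.sleep(0)
    return {"id": vertex_id, "valid": True}

async def build_flow_ndjson(vertex_ids):
    """Yield one complete JSON document per built vertex, newline-terminated (NDJSON)."""
    try:
        for vertex_id in vertex_ids:
            result = await build_vertex(vertex_id)
            yield json.dumps({"event": "end_vertex", "data": result}) + "\n"
    except asyncio.CancelledError:
        # A client disconnect cancels the generator; re-raise so pending
        # build tasks are torn down rather than left running.
        raise

async def main():
    return [line async for line in build_flow_ndjson(["a", "b"])]
```

In a FastAPI backend, a generator like this would typically be wrapped in a `StreamingResponse` with the `application/x-ndjson` media type.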
All build layers are now calculated server-side, which improves testability and performance. Within each layer, vertices are built in parallel using asyncio.
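A minimal sketch of the per-layer parallelism (the layer computation itself, e.g. a topological sort of the flow graph, is assumed to have happened already):

```python
import asyncio

# Stand-in for the real vertex build; returns the id so ordering is visible.
async def build_vertex(vertex_id: str) -> str:
    await asyncio.sleep(0)
    return vertex_id

async def build_layers(layers):
    """Build layers sequentially; vertices within a layer are independent,
    so each layer's vertices are built concurrently with asyncio.gather."""
    built = []
    for layer in layers:
        results = await asyncio.gather(*(build_vertex(v) for v in layer))
        built.extend(results)
    return built
```

`asyncio.gather` preserves the order of its arguments, so results line up with the layer's vertex list even though the builds run concurrently.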
The final result is that the build is faster (mainly due to the removed network overhead); from the user's perspective it is sometimes so fast that components seem to be skipped (they stay in the "in-progress" state for only a couple of milliseconds). To improve the user experience, the UI compares the actual build time against a minimum threshold, keeping each component in the "in-progress" state for at least 300 milliseconds. This preserves the progressive-build feeling in all cases.
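The minimum-display logic boils down to a small delta computation. A hypothetical helper (the name and the exact UI wiring are illustrative, not the actual frontend code):

```python
# Minimum time a component should appear "in-progress", per the UX tweak above.
MIN_DISPLAY_MS = 300.0

def remaining_display_ms(actual_build_ms: float) -> float:
    """Extra delay the UI should wait before marking a vertex as done,
    so the in-progress state is visible for at least MIN_DISPLAY_MS."""
    return max(0.0, MIN_DISPLAY_MS - actual_build_ms)
```

For a vertex built in 50 ms the UI waits another 250 ms; for one that took 400 ms it waits nothing.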
Since the frontend now calls a different endpoint, this would be a breaking change (e.g. if you upgrade the frontend first, the backend returns 404 because it doesn't know that endpoint yet). To overcome that, the old logic is still present and used as a fallback. However, it's recommended to run the same version of the frontend and backend.
One implementation detail is that axios doesn't support streaming responses, so I had to use `fetch`, which doesn't pass through the common interceptors and could lead to unexpected error handling (though errors should be handled correctly).
Discarded alternatives
- Using SSE instead of NDJSON streaming: modern browsers impose a hard limit on the number of open `EventSource` connections per domain over HTTP/1.1. To keep Langflow's behavior predictable, I've discarded this solution.
- Using a file-based cache in the backend: while this could have helped with multiple workers, it wouldn't have had any effect in deployments with machine-isolated backends (e.g. Kubernetes pods).
Pull Request Validation Report
This comment is automatically generated by Conventional PR
Whitelist Report
| Whitelist | Active | Result |
|---|---|---|
| Pull request is a draft and should be ignored | ✅ | ✅ |
| Pull request is made by a whitelisted user and should be ignored | ❌ | ❌ |
| Pull request is submitted by a bot and should be ignored | ✅ | ❌ |
| Pull request is submitted by administrators and should be ignored | ❌ | ❌ |
Result
Pull request matches with one (or more) enabled whitelist criteria. Pull request validation is skipped.
Last Modified at 29 Jul 24 11:52 UTC
This pull request is automatically being deployed by Amplify Hosting (learn more).
Access this pull request here: https://pr-3020.dmtpw4p5recq1.amplifyapp.com
hey @nicoloboschi, I tried to run the FE tests locally, and it seems like the session is missing after building the flow:
Before:
After:
@Cristhianzl nice catch! This has been fixed now
@nicoloboschi
We have a feature called "Freeze Path"; when I try to run a flow with it activated, it throws an error:
Could you please check for us?
@Cristhianzl I've fixed it. I've also found and fixed a related bug in another PR - https://github.com/langflow-ai/langflow/pull/3158
@nicoloboschi the frontend e2e tests passed (good job!)
Just one point: I tried to run the backend tests and got a few errors. Could you please run "make tests" and check for us? Thank you!