Research task: Create a microbenchmark setup to test the efficiency of WebSockets vs HTTP2/3 + SSE
In order to get an idea of how best to proceed with #233, it would be good to have ballpark numbers for the performance characteristics of WebSockets vs HTTP requests + Server-Sent Events for our needs.
Setup
We can get this data - completely decoupled from the internals of Jazz - by creating some synthetic microbenchmarks.
- We need a client & server setup that can handle WebSockets, HTTP 2 & 3 and Server-Sent Events.
- The client should be in TypeScript, using only Browser-native APIs (WebSocket, fetch, EventSource)
- For servers, I would like to evaluate
- A) a plain Node.JS server (or a higher-level wrapper of it)
- B) uWebSockets.js, for both WebSockets and HTTP3 requests
- C) plain Node.JS server behind a Caddy proxy that handles HTTP3 & SSL
- all of them should be single threaded, on a single node
Simulation details
Original data use that needs to be simulated
Currently, Jazz uses WebSockets to sync CoValue state between the client and syncing/persistence server.
The communication typically consists of three scenarios:
- Loading a CoValue
- The client doesn't have a CoValue's edit history or only has it partially and requests the rest from the sync server.
- The sync server responds with the missing parts of the CoValue edit history
- Creating a CoValue
- The client creates a new CoValue locally and wants to upload it completely to the sync server
- Subscribing to and mutating a CoValue
- A client is interested in realtime changes to a CoValue and subscribes to it
- The server sends individual changes to the CoValue to the client as they happen
- The client sends individual changes to the CoValue to the server when the user modifies the CoValue locally
Websockets vs Requests & SSE
Currently, all three scenarios (loading, creating, subscribing/mutating) happen over WebSockets, with one packet per request/response/incoming update.
For using Requests and SSE instead, we would use Requests & Responses for loading and creating, while for subscribing we would listen to incoming updates with Server-Sent Events and publish outgoing updates as a Request with no expected Response.
Simulation spec
There are roughly two classes of CoValues: structured CoValues (thousands of <50 byte edits) and binary-data CoValues (few edits that are each 100kB).
Since we are only interested in the data transmission performance, we can model the scenarios using packets containing random data:
- Loading a CoValue
- structured: A 100 byte request from the client and a 5kB response from the server
- binary: A 100 byte request from the client and a 50MB response from the server, streamed in 100kB chunks
- Creating a CoValue
- structured: A 10kB request from the client and a 10 byte response from the server
- binary: A 50MB request from the client, streamed in 100kB chunks and a 10 byte response from the server
- Subscribing to and mutating a CoValue
- structured: 50 byte incoming SSE messages/WebSocket packets, mutations are 50 byte outgoing messages as a request/WebSocket packet
- assume one client creating a mutation that is published to 10 other clients
- binary: 100kB incoming SSE messages/WebSocket packets, mutations are 100kB outgoing messages as a request/WebSocket packet
No extra HTTP headers should be set (other than what browsers set by default, and these should be minimised if possible)
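The packet sizes above could be captured in a small table-driven helper so that client and server generate matching random payloads. A sketch (the names and layout are my own, not prescribed by the spec):

```typescript
// Hypothetical helper encoding the simulation spec as data.
const CHUNK = 100 * 1024;               // 100kB streaming chunk size
const BINARY_TOTAL = 50 * 1024 * 1024;  // 50MB binary CoValue

interface ScenarioSpec {
  requestBytes: number;
  responseBytes: number;
  chunked: boolean; // stream the large side in 100kB chunks
}

const SPECS: Record<string, ScenarioSpec> = {
  "load/structured":   { requestBytes: 100,          responseBytes: 5 * 1024,     chunked: false },
  "load/binary":       { requestBytes: 100,          responseBytes: BINARY_TOTAL, chunked: true },
  "create/structured": { requestBytes: 10 * 1024,    responseBytes: 10,           chunked: false },
  "create/binary":     { requestBytes: BINARY_TOTAL, responseBytes: 10,           chunked: true },
  // subscription messages are 50 bytes (structured) / 100kB (binary) each
};

// Split a payload size into 100kB chunks of random bytes for streaming.
function* randomChunks(totalBytes: number): Generator<Uint8Array> {
  for (let sent = 0; sent < totalBytes; sent += CHUNK) {
    const size = Math.min(CHUNK, totalBytes - sent);
    const buf = new Uint8Array(size);
    for (let i = 0; i < size; i++) buf[i] = (Math.random() * 256) | 0;
    yield buf;
  }
}
```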
Target metrics
The main variables we are interested in are
- Loading a CoValue
- a) How many CoValues can be "simulation loaded" at once on a client? Do we get head-of-line blocking effects? (you can do structured/binary separately)
- b) How many clients can "simulation load" a CoValue at the same time per server?
- Creating a CoValue
- c) How many CoValues can be "simulation created" at once on a client? Do we get head-of-line blocking effects? (you can do structured/binary separately)
- d) How many clients can "simulation create" a CoValue at the same time per server?
- Subscribing to and mutating a CoValue
- e) How many updates per second can we push from a client?
- f) How many updates per second can a server handle (receive and broadcast to interested clients)?
- g) What data throughput can we achieve for binary CoValue updates?
- h) What's the latency like?
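Metrics e)-h) could be captured with a small recorder on the client that tags each outgoing update with a send timestamp. A hypothetical sketch (class and method names are my own):

```typescript
// Illustrative latency/throughput recorder for metrics e), f) and h).
interface Sample { sentAt: number; receivedAt: number; }

class LatencyRecorder {
  private samples: Sample[] = [];

  record(sentAt: number, receivedAt: number): void {
    this.samples.push({ sentAt, receivedAt });
  }

  // p-th percentile latency in ms (e.g. p = 0.5 for median, 0.99 for tail)
  percentile(p: number): number {
    const sorted = this.samples
      .map((s) => s.receivedAt - s.sentAt)
      .sort((a, b) => a - b);
    return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  }

  // updates per second over the whole run
  throughput(): number {
    const first = Math.min(...this.samples.map((s) => s.sentAt));
    const last = Math.max(...this.samples.map((s) => s.receivedAt));
    return (this.samples.length / (last - first)) * 1000;
  }
}
```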
Variables
It would be good to get results for the metrics above assuming
Different network conditions
- I Ideal network conditions
- II 4G speeds
- III 3G speeds
- IV connections with high packet loss, including ones so bad that we need to reconnect WebSockets
Different protocols
- WS: WebSockets only (with reconnect on timeout)
- H2: HTTP2 + SSE
- H3: HTTP3 + SSE
You don't need to actually deploy a server anywhere if you can simulate these conditions locally; just make sure to note down your hardware specs and use exactly one thread/core for the server
Dimensions summary
So in total we have the following dimensions:
- Server tech: A, B and C
- Target metrics: a), b), c), d), e), f), g), h)
- Network condition: I, II, III, IV
- Protocols: WS, H2, H3
Deliverable
- Create a completely new project in a new folder in this monorepo called 'experiments'
- Create an independent pnpm workspace in there
- Use whatever test-harness / benchmarking libraries / node setups / browser automation you think are best
- Make sure the setup is replicable / can be run by anyone
- Post results from your machine in a README in the same folder
I realise this spec is a lot, so feel free to ask lots of clarifying questions before & after accepting the task!
/bounty $2000
💎 $3,000 bounty • Garden Computing
Steps to solve:
- Start working: Comment `/attempt #301` with your implementation plan
- Submit work: Create a pull request including `/claim #301` in the PR body to claim the bounty
- Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts
Thank you for contributing to garden-co/jazz!
| Attempt | Started (UTC) | Solution | Actions |
|---|---|---|---|
| 🟢 @ayewo | Aug 12, 2024, 04:59:43 PM | #597 | Reward |
Hi @aeplay
I see your project just joined Algora, so welcome!
Different projects pick different styles of working so I'm curious, how do you want attempts at this bounty to shake out:
- first-come first served i.e. the first dev to ask nicely gets assigned to work on the issue;
- battle royale i.e. let multiple devs attempt and you'll evaluate PRs as they roll in?
The risk with style 1. is that the assigned dev might take too long to show progress (if they are inexperienced or experienced but busy with a day job).
The risk with style 2. is since there is only 1 bounty reward, anyone willing to work on this risks getting blind-sided by other devs who open PRs to claim the bounty.
Hey @ayewo good question! This is what I'm most curious about in this bounty model as well.
For this task I would say first-come first-serve, since it is quite a detailed project and I would hate anyone to waste their effort. There is no super urgent deadline for it, so I would be happy to let the first serious contender iterate on it with my input.
Excellent! In that case, I'd like to /attempt #301 this.
| Algora profile | Completed bounties | Tech | Active attempts | Options |
|---|---|---|---|---|
| @ayewo | 22 bounties from 5 projects | TypeScript, Rust, JavaScript & more | | Cancel attempt |
@ayewo yes! Let's gooo
Please ask for clarifications here, I'm in GMT+1 and mostly available during normal work hours, but also during other times on my phone for quick answers
@aeplay I'm also in GMT+1 :) and just joined your Discord.
I take it you prefer clarifications happen here in the open, right?
yes please, don't worry about making this issue noisy, that's what it's for
Roger that. Would appreciate it if you can assign the issue to me, otherwise there will be drive-by attempts from other devs feigning ignorance of our conversation above.
done, thanks for walking me through this
Hi @aeplay, I'm interested in the client and CI/CD workflow related parts of the project; if it's okay, can we split the project, @ayewo?
Hey @DhairyaMajmudar I appreciate your offer but would like to keep this focused on one person attempting it. Thank you!
That's fine!
And @ayewo just clarifying: there is no CI/CD aspect to this project - it's all meant to be run manually.
@aeplay Yes, understood. You want this to be single-thread and launched locally.
(I built a microbenchmark recently using a combination of PowerShell (on Windows) and Bash (on Linux), but they were each executed remotely on EC2 instances using Terraform.)
Hint: Next time, you may want to keep applications open a bit longer so you can evaluate a few applicants; it doesn't have to be first-come-first-serve or battle royale.
Makes sense, but this time I wanted to move quickly and @ayewo seemed eager and capable so I just went with him
I'd like to share some progress on the research I've done so far and ask a few questions.
I looked into the HTTP protocol versions supported by servers A, B, and C and it seems that only Caddy supports all three versions of the HTTP protocol natively (i.e. HTTP/1.1, HTTP/2 and HTTP/3).
HTTP Versions Supported by Web Servers[^1]
| # | Server | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---|---|---|---|---|
| A | Node.js v22.6.0 | ✅ | ✅ | ❌ |
| B | uWebSockets.js v20.47.0 | ✅ | ❌ | ⚠️ (experimental, development paused) |
| C | Caddy v2.8.4 | ✅ | ✅ | ✅ (and HTTP/2 over cleartext (H2C)) |
Node.js doesn't yet support HTTP/3 natively, but I came across a 3rd-party repo (https://github.com/endel/webtransport-nodejs) that claims to offer HTTP/3 support; I didn't look too closely.
Since you also want to test against 3 different protocols:
- WS: WebSockets only (with reconnect on timeout)
- H2: HTTP2 + SSE
- H3: HTTP3 + SSE
I tried to map servers A, B, C to the 3 protocols to see what is possible:
Web Server to Web Protocol Mapping
| # | Server | Layer 7 | Layer 6[^2] | Layer 4 | Supported |
|---|---|---|---|---|---|
| A1 | Node.js | HTTP/1.1 + WebSocket (WS) | TLSv1.3 (Optional) | TCP | ✔️ |
| A2 | | HTTP/2 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | ✔️ |
| A3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ❌ |
| B1 | uWebSockets.js | HTTP/1.1 + WS | TLSv1.3 (Optional) | TCP | ✔️ |
| B2 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | ❌ |
| B3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ❌ |
| C1 | Caddy | HTTP/1.1 + WS | TLSv1.3 (Optional) | TCP | ✔️ |
| C2 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | ✔️ |
| C3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ✔️ |
Questions

1. Is my understanding of the server to protocol mapping correct?

2. In the Simulation spec section, you wrote:

   - Subscribing to and mutating a CoValue
     - structured: 50 byte incoming SSE messages/WebSocket packets, mutations are 50 byte outgoing messages as a request/WebSocket packet
     - assume one client creating a mutation that is published to 10 other clients

   For the actual test, does this imply that after each server is started, 10 clients will be spawned that will subscribe to a CoValue, then 1 client will mutate the CoValue, triggering a notification by the server to those 10 clients?

[^1]: The emojis also link to relevant docs.
[^2]: HTTP and TLS are both layer 4 protocols in the TCP/IP model, but I opted for the OSI model here to keep things clear.
Hey @ayewo, thanks for sharing your research results in such a well-structured format.
1. It matches what I was aware of. For uWebsockets.js, can you please try the experimental HTTP3 support and let me know how it goes?

2. Your understanding is correct, and just to be clear: I am not expecting you to do anything with Jazz/actual CoValues, we are just simulating their traffic patterns by sending (client -> server) and then broadcasting (server -> 10 clients) random data
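Transport-agnostic, that fan-out step could look like this sketch (the `Broadcaster` class is illustrative only; in the benchmark each `send` would write to a WebSocket or SSE stream):

```typescript
// Server-side fan-out for the subscribe/mutate scenario: one client's
// mutation is broadcast to every other subscriber of that CoValue.
type Send = (data: Uint8Array) => void;

class Broadcaster {
  private subscribers = new Map<string, Set<Send>>();

  subscribe(coValueId: string, send: Send): () => void {
    let subs = this.subscribers.get(coValueId);
    if (!subs) this.subscribers.set(coValueId, (subs = new Set()));
    subs.add(send);
    return () => subs!.delete(send); // unsubscribe handle
  }

  // Called when a mutation arrives from one client; returns delivery count.
  publish(coValueId: string, data: Uint8Array, from?: Send): number {
    let delivered = 0;
    for (const send of this.subscribers.get(coValueId) ?? []) {
      if (send !== from) {
        send(data);
        delivered++;
      }
    }
    return delivered;
  }
}
```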
Some more clarifications:
- In all cases I'd like you to contrast using only WebSockets (for outgoing requests and their responses, and incoming notifications) vs HTTP (for outgoing requests and their responses) + SSE (incoming notifications)
- uWebsockets should support HTTP2, right?
- With Caddy, I'm only really interested in using it as a HTTP3-handling reverse proxy in front of Node.JS
So the full mapping would look like this
Web Server to Web Protocol Mapping
| # | Server | Layer 7 | Layer 6 | Layer 4 | Port | Supported |
|---|---|---|---|---|---|---|
| A1 | Node.js | WebSockets only | TLSv1.3 (Optional) | TCP | 3001 | ✔️ |
| A2 | | HTTP/1 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | 3002 | ✔️ |
| A3 | | HTTP/2 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | 3003 | ✔️ |
| A4 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ~~3004~~ | ❌ |
| B1 | uWebSockets.js | WebSockets only | TLSv1.3 (Optional) | TCP | 4001 | ✔️ |
| B2 | | HTTP/1 + SSE | TLSv1.3 (Optional) | TCP | 4002 | ✔️ |
| B3 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | 4003 | ✔️ |
| B4 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | 4004 | ⚠️ (try) |
| C1 | Caddy (in front of Node.JS) | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | 5001 | ✔️ |
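For the C1 row, a minimal Caddyfile might look like the following hypothetical sketch (Caddy enables HTTP/3 by default when serving HTTPS, and `tls internal` issues locally-trusted certs via its internal CA; the port numbers are taken from the table above):

```caddyfile
# Sketch for case C1: Caddy terminates TLS + HTTP/3 and
# reverse-proxies to a plain Node.JS server on localhost.
localhost:5001 {
	tls internal            # locally-trusted certs, no public CA needed
	reverse_proxy 127.0.0.1:3000
}
```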
Thanks for the confirmation.
- OK. I'll make a note to investigate HTTP/3 support in uWebsockets.js.
- Understood.

Please note that I have updated the 2nd table to remove the port numbers. I imagine each of the servers A, B & C will be started as a standalone process, so they could simply listen on the same port, i.e. localhost:3000, instead of listening on the individual ports (localhost:3001, localhost:4001, etc.) that I originally used.
- Yes, this was in fact one of my follow-up questions. I imagine that the set-up in A1 is essentially your baseline i.e. what you are currently using today. The others A2-A3, B1-C3 and C1-C3 are what the synthetic benchmark would be uncovering, correct?
- From my research, it seems uWebsockets.js doesn't support HTTP/2 at all, only HTTP/1.1.
- Understood. It was super clear in your original description that Caddy would serve as a reverse proxy.
yeah makes sense re ports - we can run the different cases in succession
Just double checked re uWebSockets and HTTP2 - you're right, that's surprising. Remove that case then, but try HTTP1 + SSE, please
- Yes, this was in fact one of my follow-up questions. I imagine that the set-up in A1 is essentially your baseline i.e. what you are currently using today. The others A2-A3, B1-C3 and C1-C3 are what the synthetic benchmark would be uncovering, correct?
This is exactly the case, correct
Another question: should I assume all protocol combinations will use TLS in the benchmarks? TLS is optional in HTTP/1.1 and HTTP/2 (h2c), but HTTP/3 will not work over plaintext, which is why it is the only web protocol where TLS use is mandatory.
yes please assume and use TLS for everything (local certs are ok), because one thing I am interested in is how long it takes to bootstrap a connection - which is most noticed on interrupted connections. I'm expecting Websockets + TLS to be the longest and HTTP3 + SSE + TLS to be the fastest in this regard.
Got it.
More questions.
The Simulation spec talks about simulating the transfer of structured and binary data. But looking at the main differences (source) between WebSockets and SSE in the table below:
| WebSockets | Server-Sent Events |
|---|---|
| Two-way message transmission | One-way message transmission (server to client) |
| Supports binary and UTF-8 data transmission | Supports UTF-8 data transmission only |
| Supports a large number of connections per browser | Supports a limited number of connections per browser (six) |
1. SSE only supports UTF-8 data transmission. I guess for SSE, this implies the use of base64 to encode and decode binary each way?

2. What about the 50MB limit? Is it the final payload size prior to being base64 encoded?

3. For the client that will interact with the sync server using browser-native APIs (WebSocket, fetch, EventSource), is using a (headless) Chrome instance from `playwright` sufficient? Or do you want the browser client to be configurable? In other words, does the tester get to use only Chrome, or can they pick from any of the browsers supported by `playwright`, i.e. Chrome, Edge, Safari (WebKit) or Firefox, as long as those browser-native APIs are properly supported?
- Use base64 encoding everywhere
- 50mb prior to encoding
- Just chrome is fine
Use base64 encoding everywhere
50mb prior to encoding
Re: 1 & 2
Can you relax this so that base64 encoding is not necessary for loading/creating binary CoValues? In other words, base64 encoding would only be used for delivering subscription events over a WebSocket or SSE?
It's much easier to split a 50MB binary file as-is and stream it in 100KB chunks in either direction (server->client and client->server) than to do so with base64 encoding added to the mix.
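To illustrate the overhead under discussion, here is a sketch of the base64 framing that SSE would force on binary chunks (Node's `Buffer` is used for brevity; in the browser client the decode side would use `atob` + `Uint8Array`; the helper names are hypothetical):

```typescript
// Base64 framing for binary chunks over SSE, which only carries UTF-8 text.
const CHUNK_SIZE = 100 * 1024; // 100kB streaming chunk

// Server side: wrap a binary chunk as a base64-encoded SSE event.
function encodeSseChunk(chunk: Uint8Array): string {
  const b64 = Buffer.from(chunk).toString("base64");
  return `data: ${b64}\n\n`; // SSE event framing
}

// Client side: recover the binary chunk from the event payload.
function decodeSseChunk(eventData: string): Uint8Array {
  return new Uint8Array(Buffer.from(eventData, "base64"));
}

// base64 inflates the payload by ~33%, one reason to prefer raw streaming
// for the 50MB load/create transfers and keep base64 only for SSE events.
function base64Overhead(rawBytes: number): number {
  return Math.ceil(rawBytes / 3) * 4 - rawBytes;
}
```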