The benchmark result from 2025-02-10 is definitely incorrect
The first inaccuracy, which is immediately noticeable, is the abnormally large difference between benchmarks of the same frameworks run on node.js versus bun.js. For example, ditsmod on node.js shows 3.5 K requests per second, while on bun.js it shows 99 K. In fact, ditsmod v3 on bun.js works 1.5-2 times faster, but definitely not 30 times faster.
Even if we take the results for node.js alone, the benchmarks are still clearly incorrect. For example, nestjs-express shows even better results than bare express, which is impossible if both these frameworks are running on the same version of node.js. Correct results should show that express is always 15-20% faster than nestjs-express.
In addition, express should show more than half the performance of ditsmod or fastify.
Could you check the implementation then?
Maybe it's a cluster thing then
Many frameworks changed significantly. For example, in Go only one framework is showing numbers close to its previous results; every other solution dropped to around 4k-10k from 100k-500k. Same for the C++ frameworks: a significant drop.
@waghanza, I can't tell you what's wrong with the workflow, but something is definitely not going as it should.
This is easy to check:
cd javascript/ditsmod
npm install
npm run build
NODE_APP=dist/main.js node cluster.mjs
From a second terminal, run this command 3-5 times:
wrk -H 'Connection: close' -d 5s -c 8 --timeout 8 -t 4 http://0.0.0.0:3000
The output:
Running 5s test @ http://0.0.0.0:3000
4 threads and 8 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 0.94ms 386.05us 5.53ms 73.05%
Req/Sec 1.87k 108.61 2.05k 80.50%
37315 requests in 5.00s, 3.35MB read
Requests/sec: 7457.23
Transfer/sec: 684.55KB
Stop ditsmod in the first terminal and then run these commands:
cd ../ditsmod-bun
bun install
bun run build
bun run start
From a second terminal, run this command 3-5 times:
wrk -H 'Connection: close' -d 5s -c 8 --timeout 8 -t 4 http://0.0.0.0:3000
The last command outputs:
Running 5s test @ http://0.0.0.0:3000
4 threads and 8 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 414.17us 473.23us 10.70ms 91.65%
Req/Sec 3.70k 550.42 6.43k 93.07%
74308 requests in 5.10s, 5.31MB read
Socket errors: connect 0, read 74302, write 0, timeout 0
Requests/sec: 14568.88
Transfer/sec: 1.04MB
So ditsmod runs roughly 2x as fast on Bun.js as it does on Node.js, but definitely not 30x as shown in the benchmark results from 2025-02-10.
Could you do the same for all three endpoints?
Maybe the results are totally wrong for one of them.
It doesn't make sense for everyone to do this, but here is Koa, for example:
- Node.js - 7190 (3254 in the last benchmarks on the website)
- Bun.js - 12219 (93368 in the last benchmarks on the website)
The numbers are definitely wrong. Take a look at Go and compare it with the 2024-12 results.
I have no clue why my own library is the only one showing the same results.
I'm just guessing, but maybe the machine got occupied with other tasks during the benchmark and a re-run could fix it?
No, the current results should be different from 2024-12 because the HTTP keep-alive feature is currently not used. But still, the benchmarks are definitely incorrect.
The whole command is
wrk -H 'Connection: close' --connections 64 --threads 8 --duration 15 --timeout 1 --script /home/waghanza/web-frameworks/pipeline.lua http://172.17.0.2:3000/
@KostyaTretyak
The full log is here => https://gist.github.com/waghanza/92366d18948972463a918f3e3ece2be7
Seems that there are a lot of socket errors in favor of bun
@waghanza, regarding bun errors, I don't know if the benchmark results can be trusted. Other frameworks also throw these errors when running on bun.
But these logs do not contain logs from the framework. Why? Are they specifically disabled?
These logs should look like:
info [AppModule]: PreRouterExtension: setted route GET "/".
So are we sure that we do not have server implementations that just ignore the connection close instruction as it is not common anymore? 🤔
To be honest, it now feels more like a TCP handshake benchmark than a web server benchmark (or maybe more of an HTTP/1.0 one), as far as I understood the new changes correctly.
server implementations that just ignore the connection close instruction as it is not common anymore
The server receives not just the instruction, but also the HTTP version. If it does not follow the instruction to close the connection, it can just as well ignore any other instruction at its discretion.
Can compliance with these instructions be easily verified?
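To make that concrete, here is a minimal sketch using plain node:http (not any framework from this repo; port 3000 is just an example): the server is handed both the HTTP version and the Connection header with every request, so ignoring "close" is an explicit choice on the server's side.

```ts
import http from "node:http";

// Log what the server actually receives with each request.
const server = http.createServer((req, res) => {
  console.log(`HTTP/${req.httpVersion}, Connection: ${req.headers.connection ?? "(none)"}`);
  res.end("ok");
  // node:http itself honors "Connection: close" and closes the socket after this
  // response; a server that keeps the connection open is making that choice itself.
});

server.listen(3000);
```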
To be honest it now feels more like a TCP handshake benchmark than a web server benchmark
Single Page Applications (which are very popular now) often use two web servers: one for static files, one for dynamic requests. Benchmarks are usually run for frameworks handling dynamic requests, and probably in 90% of cases a feature like keep-alive is not used (in particular because a single GraphQL query is used).
So, in the real world, dynamic queries are handled by closing the connection after a few requests (usually 1-3 requests). In my opinion, in real-world conditions, dynamic queries never get thousands of requests on a single connection, as has been the case in benchmarks so far.
Can compliance with these instructions be easily verified?
You could run curl -v -H 'Connection: close' http://server http://server, which issues two requests and states whether the connection has been re-used or not.
If you look at the results for Java it is pretty obvious that the first two frameworks are not HTTP compliant. See RFC 9112 section 9.6.
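For a lower-level check than curl, here is a rough Node.js sketch (not part of the repo; host and port are assumed from the steps earlier in this thread) that sends a single request with Connection: close and reports whether the server actually closes the TCP connection:

```ts
import net from "node:net";

// Send one request with "Connection: close" and watch what the server does with the socket.
const socket = net.connect(3000, "127.0.0.1", () => {
  socket.write("GET / HTTP/1.1\r\nHost: 127.0.0.1:3000\r\nConnection: close\r\n\r\n");
});

socket.on("data", (chunk) => process.stdout.write(chunk));

// "end" fires when the server sends FIN, i.e. it honored the header.
socket.on("end", () => {
  console.log("\n--- server closed the connection ---");
  socket.destroy();
});

// If nothing happens within a few seconds, the server kept the connection open.
setTimeout(() => {
  if (!socket.destroyed) {
    console.log("\n--- still open after 3s: the server ignored Connection: close ---");
    socket.destroy();
  }
}, 3000);
```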
In my opinion, in real-world conditions, dynamic queries never get thousands of requests on a single connection, as has been the case in benchmarks so far.
I agree, both ends of this range feel kind of synthetic.
Many of these things were already described in the previous issue: https://github.com/the-benchmarker/web-frameworks/issues/8116
Andrea even prepared a simple validation toolkit. And yes, many solutions from this repo don't give a %%%% about the HTTP standard :)
But why would somebody send "Connection: close" and immediately follow up with another request on the same connection?
I believe that's what's happening here in the benchmarks and this combination doesn't seem to make sense.
The RFC also states this:
Connection: close as a request header field indicates that this is the **last** request that the client will send on this connection
A client that sends a "close" connection option MUST NOT send further requests on that connection
^ The benchmark seems to violate this rule.
The benchmark seems to violate this rule
These rules should work for real-world applications. But in the real world, no web server receives thousands of requests per second on a single connection. Closing the connection after each request brings the benchmarks much closer to real-world usage than allowing thousands of requests to be made on a single connection.
Closing the connection after each request brings the benchmarks much closer to real-world usage than allowing thousands of requests to be made on a single connection.
Then the benchmark should actually discard the connection and not re-use it, as obviously happens.
The benchmark seems to violate this rule
These rules should work for real-world applications. But in the real world, no web server receives thousands of requests per second on a single connection. Closing the connection after each request brings the benchmarks much closer to real-world usage than allowing thousands of requests to be made on a single connection.
I think you misunderstood what I was trying to say. I'm saying the client doesn't follow the RFC here and needs to be fixed.
@akyoto, why are you sure that the client is trying to send requests to an already closed connection? Why can't subsequent requests open a new connection?
@akyoto, why are you sure that the client is trying to send requests to an already closed connection?
It's based on the numbers I see. My reverse-proxy targeted web server ignores "Connection: close" and is the only Go library that keeps getting the same results as in 2024-12 (I might implement it in the future, or not, but that's off-topic).
So, logically speaking, the client must be sending the same number of requests as in 2024.
But it shouldn't, because it sent a "Connection: close".
The client has an obligation to stop sending requests after that.
My reverse-proxy targeted web server ignores "Connection: close"
This is a problem on your web server's side. It's clearly not working according to the HTTP rules you're urging us to follow.
So, logically speaking, the client must be sending the same number of requests as in 2024.
You gave an example of how your web server works and concluded that since your web server ignores this instruction, then there is no need to send these instructions. But, in fact, this is not logical at all.
My reverse-proxy targeted web server ignores "Connection: close"
This is a problem on your web server's side. It's clearly not working according to the HTTP rules you're urging us to follow.
Whether that's a problem or not is irrelevant and off-topic here. I could go on explaining how it's not a problem in certain use-cases, but again, I'll spare you the off-topic conversations. Let's focus on the actual problem.
So, logically speaking, the client must be sending the same number of requests as in 2024.
You gave an example of how your web server works and concluded that since your web server ignores this instruction, then there is no need to send these instructions. But, in fact, this is not logical at all.
No @KostyaTretyak that's not what I'm saying at all.
It's not necessary to stop sending "Connection: close".
You have 2 solutions here:
a) Send "Connection: close" and stop sending requests after that. b) Not send "Connection: close" and keep the old behavior.
You probably misunderstood my post as a call to use b) as a solution, but that's not my point at all. You can use a) or b), I don't care. Either one is fine. But a solution has to be decided on.
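For illustration, here is a rough sketch (this is not wrk's code; host, port and the request count are made up) of what a) looks like from the client side: every request carries Connection: close, nothing further is ever written to that connection, and each new request opens a fresh socket.

```ts
import http from "node:http";

// One request per connection: "Connection: close" is sent, and once the final
// response has been read the socket is gone; the next request opens a new one.
function oneRequest(): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        host: "127.0.0.1",
        port: 3000,
        path: "/",
        agent: false, // no pooling, no connection reuse
        headers: { Connection: "close" },
      },
      (res) => {
        res.resume();                   // drain the body
        res.on("end", () => resolve()); // connection is torn down after this response
      },
    );
    req.on("error", reject);
    req.end();
  });
}

// Run as an ES module so top-level await is available.
for (let i = 0; i < 5; i++) {
  await oneRequest();
}
console.log("5 requests, 5 separate connections");
```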
Send "Connection: close" and stop sending requests after that
But why? Above you explained that you want the client to not send requests to an already closed connection. But you still didn't answer my question:
@akyoto, why are you sure that the client is trying to send requests to an already closed connection? Why can't subsequent requests open a new connection?
Well, because you want to test web server frameworks and not bananas, I suppose?
A client that sends a "close" connection option MUST NOT send further requests on that connection (after the one containing the "close") and MUST close the connection after reading the final response message corresponding to this request.
The golden rule of network communications is to never trust the other side. It doesn't matter what the server does - the client can't trust the server and it needs to stop sending requests. This is not only required by the RFC but it also makes sense from a logical standpoint.
Either way, I don't think people understand that I'm arguing against my own benefit here.
I mean, if you want to keep sending requests on a connection with this header on it, be my guest. It'll make the numbers of servers with non-compliant implementations look much better than they are.
So, do you want to improve the accuracy of the benchmark, or not?
Because I can guarantee you, there will be many more servers ignoring the header. Complaining about oranges being orange won't change anything. Do you want more accurate numbers or not?
Thanks for your insights, and glad there is a debate here 🎉
Not sure I understand.
Then the benchmark should actually discard the connection and not re-use it, as obviously happens.
@Kaliumhexacyanoferrat
The whole idea of this project is to test frameworks in a real-world scenario:
- we have disabled keep-alive, which messes up the results with respect to the above goal
- the scenario will change with database loading, serialization, ...
- the benchmark will not run indefinitely on my workstation; a real infrastructure will be used
- we definitely need to check some compliance here; @trikko spotted some frameworks that are not HTTP compliant, like others
@waghanza The problem is that we are talking about 2 different things here.
You talk about the testing methodology. Nobody here has a problem with your testing methodology. Sure, we could go down the rabbit hole of discussing what's more "real", but as far as I understand the people posting here, nobody criticizes this decision.
The problem is the testing implementation itself, which is incorrect: your client does not adhere to common principles, such as no longer sending requests after it tells the server to close the connection. The specs mention explicitly that the client should not do that, and because of this buggy implementation you get a lot of incorrect outcomes. Even if we disregard the specs for a second and just think about it: why would you tell everybody that the highway is going to be closed and then attempt to drive over said highway? You should not drive over the highway if it's going to be closed. The correct thing to do here is to find a new one.
methodology != implementation
Understood. I do not take it as criticism but as a way to improve the whole project.
As I understand it, the tool (wrk) sends HTTP requests after closing the connection?
How can it send data on a TCP connection if the TCP connection is closed? Does anyone have an example to debug?
As I understand it, the tool (wrk) sends HTTP requests after closing the connection?
It doesn't close it after reading the final response message, and that's precisely the problem.
A client that sends a "close" connection option MUST NOT send further requests on that connection (after the one containing the "close") and MUST close the connection after reading the final response message corresponding to this request.
This could be a little misleading: if the server responds with Connection: close, the server itself should close the connection. The server in this case is not required to send Content-Length, so the client can't close the connection itself, since it probably can't even tell whether the response is complete, and it has to wait for the server to close it, AFAIK!
Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
That's backward compatibility with HTTP/1.0, where connection keep-alive didn't exist and the connection was closed after each request.
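To illustrate that framing rule, a rough Node.js sketch (not part of the repo; host and port assumed as above): when the response declares neither Content-Length nor chunked encoding, the only signal that the body is complete is the server closing the connection.

```ts
import net from "node:net";

const chunks: Buffer[] = [];

const socket = net.connect(3000, "127.0.0.1", () => {
  socket.write("GET / HTTP/1.1\r\nHost: 127.0.0.1:3000\r\nConnection: close\r\n\r\n");
});

// Without Content-Length or chunked encoding there is no in-band "end of body" marker...
socket.on("data", (chunk) => chunks.push(chunk));

// ...so only the server closing the connection tells the client that the body is complete.
socket.on("end", () => {
  const raw = Buffer.concat(chunks).toString("utf8");
  const body = raw.split("\r\n\r\n").slice(1).join("\r\n\r\n");
  console.log(`body complete at connection close: ${body.length} bytes`);
});
```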