Using vegeta in a community-driven benchmarking project
Hi,

I'm leading a benchmarking project: https://github.com/the-benchmarker/web-frameworks.

We are currently using wrk, but it doesn't seem well suited to our goal. I've started a thread to discuss changing our toolset: https://github.com/the-benchmarker/web-frameworks/discussions/8088.

Any insights from community members here are very appreciated :heart:
We currently run wrk without rate limiting, and I have some results that I cannot explain.

Testing this endpoint https://github.com/the-benchmarker/web-frameworks/blob/aedc5b0a39a18840b7818906f389f6277b66cbd1/rust/actix/src/main.rs#L12 with wrk:

```
wrk http://172.17.0.6:3000
```

I get:

```
Running 10s test @ http://172.17.0.6:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    35.73us   73.57us   2.77ms   97.49%
    Req/Sec   167.36k    11.09k  180.08k    91.58%
  3364219 requests in 10.10s, 240.63MB read
Requests/sec: 333098.97
Transfer/sec:     23.83MB
```

and with vegeta, disabling rate limiting:

```
echo "GET http://172.17.0.6:3000" | ~/bin/vegeta attack -duration 10s -max-workers 2 -rate 0 -max-connections 10 | tee results.bin | vegeta report -type=json | python -m json.tool -
```

I get:
```json
{
  "latencies": {
    "total": 19222679102,
    "mean": 80362,
    "50th": 80846,
    "90th": 89392,
    "95th": 99321,
    "99th": 160442,
    "max": 7349046,
    "min": 14917
  },
  "bytes_in": {
    "total": 0,
    "mean": 0
  },
  "bytes_out": {
    "total": 0,
    "mean": 0
  },
  "earliest": "2025-01-03T09:26:06.784554552+01:00",
  "latest": "2025-01-03T09:26:16.78456101+01:00",
  "end": "2025-01-03T09:26:16.784617968+01:00",
  "duration": 10000006458,
  "wait": 56958,
  "requests": 239201,
  "rate": 23920.084552409397,
  "throughput": 23919.948309255804,
  "success": 1,
  "status_codes": {
    "200": 239201
  },
  "errors": []
}
```
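One variable I still want to rule out on my side: the vegeta command above caps concurrency at 2 in-flight requests (`-max-workers 2`), while the wrk run uses 10 connections across 2 threads. A sketch of a closer apples-to-apples run (same flags, only the worker cap raised to match wrk's connection count; I haven't verified whether this closes the gap):

```
echo "GET http://172.17.0.6:3000" | ~/bin/vegeta attack -duration 10s -max-workers 10 -rate 0 -max-connections 10 | vegeta report
```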
and with another Rust-based framework, https://github.com/the-benchmarker/web-frameworks/blob/aedc5b0a39a18840b7818906f389f6277b66cbd1/rust/axum/src/main.rs#L8:

- vegeta
```json
{
  "latencies": {
    "total": 29042425944,
    "mean": 82294,
    "50th": 81431,
    "90th": 89874,
    "95th": 97185,
    "99th": 169418,
    "max": 12406035,
    "min": 13542
  },
  "bytes_in": {
    "total": 0,
    "mean": 0
  },
  "bytes_out": {
    "total": 0,
    "mean": 0
  },
  "earliest": "2025-01-03T09:52:19.095929094+01:00",
  "latest": "2025-01-03T09:52:34.095824521+01:00",
  "end": "2025-01-03T09:52:34.095885729+01:00",
  "duration": 14999895427,
  "wait": 61208,
  "requests": 352907,
  "rate": 23527.297354671085,
  "throughput": 23527.20135047244,
  "success": 1,
  "status_codes": {
    "200": 352907
  },
  "errors": []
}
```
- wrk

```
Running 10s test @ http://172.17.0.7:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   154.99us  322.14us  14.18ms   99.25%
    Req/Sec    36.47k     2.83k   65.80k    93.03%
  729313 requests in 10.10s, 52.16MB read
Requests/sec:  72209.68
Transfer/sec:      5.16MB
```
and with a Nim-based framework, https://github.com/the-benchmarker/web-frameworks/blob/aedc5b0a39a18840b7818906f389f6277b66cbd1/nim/prologue/server.nim#L3:

- vegeta
```json
{
  "latencies": {
    "total": 29113548225,
    "mean": 94888,
    "50th": 87280,
    "90th": 109852,
    "95th": 130171,
    "99th": 259565,
    "max": 4068316,
    "min": 21292
  },
  "bytes_in": {
    "total": 0,
    "mean": 0
  },
  "bytes_out": {
    "total": 0,
    "mean": 0
  },
  "earliest": "2025-01-03T10:15:11.805718673+01:00",
  "latest": "2025-01-03T10:15:26.805640684+01:00",
  "end": "2025-01-03T10:15:26.80573285+01:00",
  "duration": 14999922011,
  "wait": 92166,
  "requests": 306818,
  "rate": 20454.639682459612,
  "throughput": 20454.514001090334,
  "success": 1,
  "status_codes": {
    "200": 306818
  },
  "errors": []
}
```
- wrk

```
Running 10s test @ http://172.17.0.10:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    49.15us  131.55us   7.76ms   98.40%
    Req/Sec   114.19k    11.45k  207.48k    93.03%
  2283029 requests in 10.10s, 285.22MB read
Requests/sec: 226054.00
Transfer/sec:     28.24MB
```
Any idea how to explain these figures (I mean the differences between the two tools)?
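One back-of-the-envelope sanity check on the numbers above (my own reasoning, not output from either tool): by Little's law, mean latency × throughput estimates the effective number of in-flight requests. Plugging in the actix figures, vegeta's numbers imply roughly 2 in-flight requests (matching its `-max-workers 2`), while wrk's imply roughly 12 (close to its 10 connections), so the two tools may simply not be driving the same concurrency:

```python
# Little's law: in-flight requests ≈ mean latency (seconds) × throughput (req/s)
def effective_concurrency(mean_latency_s: float, throughput_rps: float) -> float:
    return mean_latency_s * throughput_rps

# Figures from the actix run above:
# vegeta reports latency in nanoseconds: mean 80362 ns, throughput ~23920 req/s
print(effective_concurrency(80362e-9, 23919.95))   # ≈ 1.9
# wrk reports 35.73 us mean latency at ~333099 req/s
print(effective_concurrency(35.73e-6, 333098.97))  # ≈ 11.9
```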
cc @appleboy @tsenart