Results: 307 comments of Marwan Rabbâa

@picatz I don't have that in https://github.com/waghanza/http-benchmark/blob/use_falcon/ruby/sinatra/Dockerfile

@aemadrid Thanks for the reminder, I've fixed that in https://github.com/the-benchmarker/web-frameworks/pull/481

I have it now (env: 4 CPU x 8 GB), with `wrk http://0.0.0.0:3000`:

~~~
falcon
===============================
Running 10s test @ http://0.0.0.0:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev...
~~~
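For reference, the bare `wrk` call above runs with wrk's defaults, which match the values shown in the output; a minimal sketch of the equivalent explicit invocation:

```
# Explicit form of the default wrk run: 2 threads, 10 connections, 10 seconds
wrk -t2 -c10 -d10s http://0.0.0.0:3000
```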

@ioquatix Sure, `puma` is more mature. That's why I will not merge to **master** :stuck_out_tongue_closed_eyes: In our **benchmark**, we use `raw` `puma` (no `nginx`) -> keep track of `falcon` on...
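For context, a minimal sketch of what a `raw` `puma` setup (no `nginx` in front) can look like; the worker and thread counts here are illustrative assumptions, not the benchmark's actual settings:

```
# Hypothetical example: bind puma directly to a TCP port, no reverse proxy
# -w: worker processes, -t min:max: threads per worker (illustrative values)
puma -w 4 -t 8:8 -b tcp://0.0.0.0:3000 config.ru
```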

You are right, the average latency for `starlette` should be about 1.26ms instead of what is shown at https://web-frameworks-benchmark.netlify.app/result?asc=0&f=starlette&metric=averageLatency&order_by=level64

```
Running 15s test @ http://172.17.0.2:3000/
  8 threads and 64 connections
  Thread Stats   Avg...
```

I thought about using a self-hosted PostgreSQL and a GraphQL engine (Hasura), both on DigitalOcean. What do you think? I'm able to push results if...

> You mean use postgres to store the benchmark result?

Yes

> And I think using graphql is too overkill for this use case

Ok, probably using [PostgREST](https://postgrest.org/) should...
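To illustrate the PostgREST idea: it exposes each database table as a REST endpoint with filter and ordering operators in the query string. A sketch, assuming a hypothetical `results` table with `framework` and `latency` columns:

```
# Hypothetical: fetch sinatra results ordered by latency
# (eq. and .asc are PostgREST's standard filter/ordering operators)
curl "http://localhost:3000/results?framework=eq.sinatra&order=latency.asc"
```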

Personally, I do not have one. I'd be glad if someone could give some time to this UI.

@tomchristie It blocks `responder` (since `apistar` is one of its dependencies) from using the latest `typesystem` version.

@cctse `japronto` seems to perform better, but is it appropriate to add it to the **benchmarks** (~2.5)? @frnkvieira What do you think?