Changes / Updates / Dates for Round 20
We will be locking PRs for Round 20 on 11/11
I'll be editing this pinned issue to link to other issues or pull requests that may affect tests for this round.
Changes
- [ ] #5648 We will be enforcing the size of route names for tests for Round 20 (i.e. the plaintext route must be at least the same length as `/plaintext` and cannot be `/p`).
Request for Comments
- #5673 Server Response Header Change
- #5844 Out-of-process Caches
Previous Round Information
See: https://github.com/TechEmpower/FrameworkBenchmarks/issues/4966 for details about the previous round released on 5/28/2020
Round 20 will be adding visualization of the cached queries test type. In fact, it's very likely that we'll push this visualization out soon since Round 19 already has some results, and doing so will make these render for continuous runs.
If your framework doesn't yet include an implementation of this new test type, please consider adding one!

The visualization of the cached queries results is a great addition, especially given that the test was defined (and the first implementations appeared) three years ago. I think the lesson here is that there won't be a significant number of implementations of a future test type unless the results are shown properly on the results dashboard.
There is just one small issue right now - while a partial set of cached queries results (i.e. for a run that hasn't completed yet) is shown properly, the visualization doesn't work for runs that have completed - I just get "No data available for this test".
As for the changes for the next round, are there any plans to upgrade to Ubuntu 20.04? Updating the kernel version has had a significant performance impact in the past.
> There is just one small issue right now - while a partial set of cached queries results (i.e. for a run that hasn't completed yet) is shown properly, the visualization doesn't work for runs that have completed - I just get "No data available for this test".
I think this is a side effect of renaming the test type from cached_query to cached-query. Runs that started after this change in the toolset will have visualizable data, runs from before that will not. The TFB website only understands the new name. For Round 19, the new name was manually edited into the website's copy of the results.
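For anyone who wants an older results file to render locally, here is a minimal sketch of the kind of rename that was applied; the file names are just placeholders:

```bash
# Rewrite the old test-type key so the website's visualization recognizes it.
# "old-results.json" / "fixed-results.json" are placeholder file names.
sed 's/"cached_query"/"cached-query"/g' old-results.json > fixed-results.json
```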
> As for the changes for the next round, are there any plans to upgrade to Ubuntu 20.04? Updating the kernel version has had a significant performance impact in the past.
I don't have a strong opinion on this. Looking at the Ubuntu release schedule, 20.04 is still in "Hardware and Maintenance" mode until 2022, and as far as I understand it, the underlying kernel would be the same version, right? We could probably do this with little effort if it made sense to do.
> ... the underlying kernel would be the same version, right?
By default Ubuntu Server 18.04 uses kernel version 4.15 (there's also the HWE stack; I am not sure how it works, but it might be an option) and 20.04 gets 5.4, so definitely not. I think it is reasonable to stick with LTS releases, so upgrading now means that this question won't come up for another 2 years. Obviously I can't say for certain what kind of impact a newer kernel would have on the benchmarks - it might even cause regressions. However, I have the feeling that generally speaking the negative effects of the security countermeasures tend to be ameliorated in newer versions and there's always optimization work in the network stack, for example.
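For reference, here is a quick way to confirm which kernel a host is actually running, plus a sketch of how the HWE option would presumably be enabled on 18.04 (the metapackage name is my assumption about how that stack is installed):

```bash
# Show the running kernel version (4.15 on stock 18.04, 5.4 on 20.04).
uname -r

# Assumption: the HWE stack on 18.04 is enabled via this metapackage,
# which pulls in the newer kernel series backported from later releases.
sudo apt-get install --install-recommends linux-generic-hwe-18.04
```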
Hello everyone!
Some Round 20 news:
We will be locking PRs for Round 20 on 11/11 so that we can kick off official round runs by that weekend. Please have your PRs in before then if you'd like to be sure that they make it into the official rounds.
About two weeks after Round 20 is posted, we will be moving to the new Toolset. Nothing will be required from framework maintainers for their existing frameworks to use the new toolset. We have a tool that will automagically replace the current benchmark configs with the new config. There will be plenty of questions during this transition phase, and we'll be here to help. See #6064 for more information about this.
Around this time, we will be taking down Citrine to do some kernel updates and get the new toolset working properly. Exciting times ahead!
Hey everyone! Quick update here. We're still working on kicking off the Azure run, since it's been some time since we've noticed an automated run going. We'll have to figure out what's going on there, and in the meantime, you have a few extra days to get some PRs in. Let's go with locking PRs by Thursday 11/19 so we can try to capture Round 20 over the next weekend.
> Add a message to tfb-status with some advance notice before PR submissions are closed for the round (From Round 19)
@joanhey you're absolutely right. I forgot to do this. Hopefully nobody is in a last-minute rush to get things in and they've at least looked at this issue in the 20 days since I first posted that comment. I'll put up a notice now. We're planning on kicking off the run on Tuesday; we had some trouble with Azure and Mongo.
Due to some technical difficulties and a busy client schedule, plus the holidays, we're going to postpone Round 20 until after the holiday break. I'll change the notice on tfb-status. Let's set the last day for PRs to 12/28.
Without CI right now, I have to go through these PRs and test locally so please get your PRs in early and make sure to test locally before opening. Thanks!
@nbrady-techempower https://tfb-status.techempower.com/share is down, so it's hard to visualise local runs at the moment. I've confirmed the #6245 results.json manually, but it would be good to have the link back up so I can post links in PRs showing successful local runs.
@sagenschneider We'd like to have it back up, too, but there's a light rain in Southern California which means we're having electrical troubles.
Confirmed local verification and posted https://github.com/TechEmpower/FrameworkBenchmarks/pull/6236#issuecomment-751897551.
For others, the `./tfb -m verify --test xx` tooling still works locally for testing (drop the `-m verify` to measure performance).
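For example (the test name below is a placeholder; this just mirrors the commands mentioned above):

```bash
# Verify a single framework test locally ("mytest" is a placeholder name).
./tfb -m verify --test mytest

# Drop "-m verify" to run the actual benchmark and measure performance.
./tfb --test mytest
```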
Official runs have started for both our environments on https://tfb-status.techempower.com
If it's your thing, please make an offering to Euros so that a light eastward breeze doesn't knock our power out.
> so that a light eastward breeze doesn't knock our power out.
Alas, the odds were not in our favour

Classic 2020. It may just be tfb-status that is down. The Azure run should still be going fine, and I will check on Citrine. Will post here when I have more info.
Everyone went crazy over the year 2000, thinking everything would fail. And the real problem turned out to be 2020. Life is life!!!! Sorry, Round 20 :)
504 Gateway Time-out :thinking: nginx 1.16.1 (stable is 1.18.0, mainline is 1.19.6) in the most prestigious benchmark. Happy new year!!!!
@joanhey The good news is the benchmarks are still running, it's just our reporting endpoints that are down. We won't have anyone in the office until Monday but we should have complete runs posted by then for all to review. Thank you and everyone else here for all your contributions and help this year. It really is appreciated by all of us at TechEmpower!
thanks @nbrady-techempower @bhauer and all at techempower. merry new year!
While we continue to work on some stability issues with tfb-status, here are the Round 20 logs for review. https://tfb-logs.techempower.com/Round%2020/
And the results? I am waiting for the results so I can check whether an atomic reference without relaxed ordering really creates a bottleneck in a multi-threaded environment with more than 2 threads.
Did the server die, or did someone hack everything? Wasn't it going to be available on Monday?
Thank you very much for your work.
A temporary solution would be to use the results sharing.
But there is a conflict with the CORS policy on tfb-logs.
https://www.techempower.com/benchmarks/#section=test&resultsurl=https%3A%2F%2Ftfb-logs.techempower.com%2FRound%252020%2Fcitrine-results%2F20201229183947%2Fresults.json
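For what it's worth, a share link like the one above can be assembled from any publicly hosted results.json by URL-encoding its address into the `resultsurl` fragment parameter. A rough sketch (the raw GitHub URL is just a placeholder):

```bash
# Placeholder location of an uploaded results.json.
RESULTS_URL="https://raw.githubusercontent.com/<user>/FrameworkBenchmarks/results/results.json"

# URL-encode it with jq's @uri filter and append it to the benchmarks page.
ENCODED=$(jq -rn --arg u "$RESULTS_URL" '$u | @uri')
echo "https://www.techempower.com/benchmarks/#section=test&resultsurl=${ENCODED}"
```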

I uploaded the results files to GitHub and created two results-sharing links as a temporary solution.
Round 20
Citrine:
https://www.techempower.com/benchmarks/#section=test&resultsurl=https%3A%2F%2Fraw.githubusercontent.com%2Fjoanhey%2FFrameworkBenchmarks%2Fresults%2Fresults.json
Azure:
https://www.techempower.com/benchmarks/#section=test&resultsurl=https%3A%2F%2Fraw.githubusercontent.com%2Fjoanhey%2FFrameworkBenchmarks%2Fresults%2Fazure-results.json
Enjoy it !!!
@joanhey The truth is that I have enjoyed seeing that the top 3 fortunes implementations contain part of my code.
PHP has little of it left; after all, it was a script for uploading a CV that ended up oversized.
> While we continue to work on some stability issues with tfb-status, here are the Round 20 logs for review. https://tfb-logs.techempower.com/Round%2020/
Thanks, I had forgotten that the logs include system metrics from dstat. That's really helpful for being able to tell how much of a performance impact a change has for any of the top JSON or Plaintext results.
As many people already know, for almost anything in the top 20, the client machine is the bottleneck for JSON tests and the network is the bottleneck for Plaintext tests, so results will always be around 1.6M and 7M respectively and the ranking is more or less random. However, by taking a look at the stats, you can get a feel for the actual impact of your changes.
As an example, I know from my own testing that the platform implementation (libreactor) is faster than the micro-framework implementation (libreactor-server); however, in the Round 20 run libreactor-server is in 3rd place and libreactor is back in 7th place. On the other hand, when I take a look at the stats and scroll down to the last instance of "total cpu usage", I can see that libreactor was only consuming about 50% of the available CPU for the 512-connection test, whereas libreactor-server uses around 70% of the available CPU. This is more in line with my expectations of the difference between the two.
libreactor
"1609488255.116": {
"total cpu usage": {
"sys": 31.174,
"stl": 0.0,
"idl": 52.4,
"usr": 16.426,
"wai": 0.0
},
libreactor-server
"1609488628.826": {
"total cpu usage": {
"sys": 42.686,
"stl": 0.0,
"idl": 28.43,
"usr": 28.883,
"wai": 0.0
},
I know there is discussion about using the DB server as an additional source of client load in the future, and that should help address the issue for the JSON tests. In the meantime this is probably our best workaround for roughly assessing performance.
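For anyone else digging through these logs, here is a rough way to grab just the last "total cpu usage" sample from a downloaded stats file without scrolling through the whole thing; the file name is a placeholder, and this assumes the block layout shown in the excerpts above:

```bash
# Print the final "total cpu usage" block (the matching line plus the six lines after it).
grep -A 6 '"total cpu usage"' stats.json | tail -n 7
```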
yes. there is a nice website here which lets you see them. very useful: https://ajdust.github.io/tfbvis/. you can clone it and update the test results yourself locally if you don't want to wait for the author to update the website.
@billywhizz oh wow, didn't even know that existed, thanks! Way easier than manually reviewing the JSON lol
@billywhizz Very cool! I'll create a page in the wiki for unofficial visualizations to keep track of things like this. thanks!
Review this URL: https://ajdust.github.io/tfbvis/
In time it will have the Round 20 results to compare. Happy new year!!
We also need another URL for the results history, for example: https://tfb-status.techempower.com/timeline/php/plaintext (once tfb-status is working again).
@nbrady-techempower thanks for merging my PR, do we still have time to test new configurations? I just want to make sure I have enough time in the event of a regression.
Hi @dantti -- Do you mean for Round 20? We captured those results a couple of weeks ago and are just putting together some things to get the blog post and the results posted. See https://github.com/TechEmpower/FrameworkBenchmarks/issues/5709#issuecomment-754576921 for some visualizations.
@nbrady-techempower oh, so my regressions weren't fixed in time :(
Sorry :( We are working hard on getting the new toolset in and improving tfb-status to help get rounds out faster. Obviously this last year was a challenge, to say the least. We hope to get the next one out much quicker!
ok, so time for more PRs :)
I'm finally going to get Round 20 posted next week!
@bhauer @nbrady-techempower congrats on getting Round 20 out the door. Any chance you will be adding the Round 20 release on GitHub any time soon? I am writing a blog post and companion CloudFormation template based on Round 20 and I'd like to be able to use the "official" release.
@talawahtech https://github.com/TechEmpower/FrameworkBenchmarks/releases/tag/R20
@nbrady-techempower awesome, thanks.
But I do not understand: all the previous times it performed better, and the owner has removed these benchmarks. Why do you add it to this round? Don't you like me? I can understand that you do not like me, but this — good for science!
@botika Please contact the framework maintainer. We do not make those decisions.
https://github.com/TechEmpower/FrameworkBenchmarks/pull/6323/commits/8d35467fc6b26dfe605fcc9ffc37aeb3e46ef843 He deleted it 13 days ago, according to the commit. When does Round 21 come out? I have a feeling this season 20 will not be very good :smile_cat: