zio-http
Enable benchmark monitoring with regression CI hook
We need JMH-based benchmarks to be run as part of CI, with automatic failure if performance on any benchmark falls below a threshold set in configuration.
/bounty $750
$750 bounty • ZIO
Steps to solve:
- Start working: Comment /attempt #2265 with your implementation plan
- Submit work: Create a pull request including /claim #2265 in the PR body to claim the bounty
- Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts

Additional opportunities:
- Livestream on Algora TV while solving this bounty & earn $200 upon merge! Comment /livestream once live
Thank you for contributing to zio/zio-http!
Attempt | Started (GMT+0) | Solution
---|---|---
🔴 @kitlangton | Aug 3, 2023, 11:28:32 AM | WIP
🔴 @uzmi1 | Oct 28, 2023, 2:50:36 PM | WIP
🔴 @nermalcat69 | Dec 6, 2023, 5:49:54 PM | WIP
🟢 @alankritdabral | Mar 24, 2024, 2:12:38 PM | WIP
I've been making some good progress on this in a separate repo. /attempt #2265
I'm going to make a GitHub action that will parse the JMH output and compare its performance against past run data (serialized and stored in a separate branch). If the benchmarks fall beneath the configured threshold, it will fail CI. I'm also going to try to have it post the benchmark results as a comment on the pull request.
This action can be a separate zio org project if it proves useful. It should be generic enough to attach to any zio project. (Also, the action itself is written with ZIO and Scala.js, so that's fun!)
Making some progress:
Let me know if you have any design thoughts/questions :)
Thank you for your serious attitude towards zio-http performance! It is already one of the fastest contemporary Scala web servers; just see the results here and here.
Here are a couple of ideas for JMH benchmarking:
- Use the gc and perfasm JMH profilers to store allocation rates as auxiliary metrics, and the disassembled code generated by the JIT for the hottest places, together with the throughput results (see the example invocation after this list).
- Use JMH visualizer to easily see main and auxiliary metrics together with their confidence ranges, and compare them interactively using references to raw .json files on GitHub, like here
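As an illustrative invocation only (the sbt module name is a placeholder, not the actual zio-http benchmark project), the profilers and a machine-readable report for the visualizer can be requested on the JMH command line like this:

```bash
# Add the gc and perfasm profilers and write a JSON report that the JMH visualizer can load.
sbt "zioHttpBenchmarks/Jmh/run -prof gc -prof perfasm -rf json -rff jmh-result.json"
```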
Also, my 2 cents for HTTP-server benchmarking:
- Measure latency for different combinations of fixed throughput rates and numbers of open connections using wrk2 (an illustrative command follows this list). Here is a great talk by @giltene about understanding latency and measuring application responsiveness.
- Use async-profiler during benchmarking to see what is happening under the hood: it shows almost everything, including JVM, C++, and kernel stack frames, and virtual and interface calls (vtable and itable). If the results are stored in the .jfr format, they can be converted to Netflix's flamescope format and then browsed interactively with 10 ms granularity to observe the different modes of server operation (warming up, GC-ing, etc.).
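For the fixed-throughput latency measurements, a sketch of a wrk2 invocation (the endpoint, rate, and connection count here are placeholders, not recommended settings):

```bash
# Hold a constant 2000 req/s over 100 open connections for 30 seconds and print the latency distribution.
wrk -t4 -c100 -d30s -R2000 --latency http://localhost:8080/plaintext
```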
UPDATE: I've created the following JMH Benchmark Action repository.
One bigger concern is that these benchmarks take a good deal of time to run, even on a relatively powerful M2 Mac. Doing this in CI, even configured with fewer iterations/forks, will still take quite a while. One option would be to only run this action if there's a benchmark label added to the PR. Then the maintainers can opt in to benchmarks when it seems relevant to the work being done. (This is something that can be done with a few lines in the workflow yaml, so it doesn't have to be part of the main action.)
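As a minimal sketch of that label gate (the job name, label name, and sbt command are assumptions, not the actual zio-http workflow):

```yaml
jobs:
  jmh-benchmarks:
    # Only run when a maintainer has added the "benchmark" label to the PR.
    if: contains(github.event.pull_request.labels.*.name, 'benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run JMH benchmarks
        # Placeholder sbt invocation; the real benchmark module/task may differ.
        run: sbt "zioHttpBenchmarks/Jmh/run -i 3 -wi 3 -f 1"
```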
UPDATE: And, as usual, the complexity unfurls itself as you approach the end. It turns out it's not as simple to merely "comment on a pull request" as it first appeared (more info here: https://github.com/zio/zio-http/pull/2369). But I have spotted a workaround.
Another thought from that PR: There's a lot of variance in certain very high ops/s benchmarks, so I should probably take the standard deviation into account when attempting to identify a regression, instead of just naively comparing the final scores.
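As a sketch of that idea in plain Scala (the case class fields are assumed names, not the action's real data model): flag a regression only when the drop exceeds the combined error margins, rather than on any decrease in score. JMH's JSON report includes a per-benchmark score error that could serve as that margin.

```scala
// Hypothetical shape for one parsed JMH result; the action's real model may differ.
final case class BenchmarkResult(name: String, score: Double, scoreError: Double)

object RegressionCheck {
  /** Flags a regression only when the new score drops below the baseline by more than
    * the combined error margins scaled by a tolerance factor, instead of a naive `<`.
    */
  def isRegression(
      baseline: BenchmarkResult,
      current: BenchmarkResult,
      tolerance: Double = 2.0
  ): Boolean = {
    val margin = (baseline.scoreError + current.scoreError) * tolerance
    current.score < baseline.score - margin
  }
}
```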
Alrighty. A summary of open design questions:
- How to report the results:
  - Comment on PR: The naive way of commenting on a PR from within a workflow is rife with difficulty stemming from security issues (https://securitylab.github.com/research/github-actions-preventing-pwn-requests/). After doing some research, it seems the safest way of achieving the PR comment summary would be to use two workflows, one for running benchmarks and another for commenting (this approach is described in the linked article). Example:
  - Job Summary: Alternatively, would posting a job summary be a simpler and more efficient solution, avoiding the need for two separate workflows and the use of artifacts? Example (a sketch also follows this list):
- Determining what counts as a regression:
  - There's a lot of variance in certain very high ops/s benchmarks, so I should probably take the standard deviation into account when attempting to identify a regression, instead of just naively comparing the final scores.
- When should we run the benchmark workflow?
  - Running every benchmark on every PR commit, even with a modest number of iterations, will chew through CI hours. Perhaps we could opt in to running benchmarks by having the workflow check for a Benchmark label. Another option would be to explicitly run the benchmarks by having the workflow watch for a comment like "/benchmark" posted by a maintainer.
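For the job-summary option above, a rough sketch of a step that publishes the comparison without a second workflow (the comparison file name is a placeholder). GitHub renders any markdown appended to $GITHUB_STEP_SUMMARY on the run's summary page:

```yaml
      - name: Publish benchmark comparison as a job summary
        # Markdown appended to $GITHUB_STEP_SUMMARY shows up on the workflow run page,
        # so no PR comment (and no second, privileged workflow) is needed.
        run: |
          echo "## JMH benchmark comparison" >> "$GITHUB_STEP_SUMMARY"
          cat benchmark-comparison.md >> "$GITHUB_STEP_SUMMARY"
```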
@kitlangton are you still on this or can I make an attempt?
/claim #2502
Hi @jdegoes, check this solution.

Bug Description: The current implementation lacks benchmark monitoring, and there is no CI hook for regression testing. This creates a gap in performance monitoring, potentially leading to undetected regressions and performance issues. The absence of benchmark monitoring makes it challenging to identify changes that negatively impact system performance.
Impact:
- Undetected Performance Regressions: Without benchmark monitoring, performance regressions may go unnoticed, leading to degraded system performance.
- Missing Continuous Integration (CI) Hook: Lack of a CI hook for regression testing means changes in the codebase may not undergo performance testing during the CI/CD pipeline.

Steps to Reproduce:
1. Inspect Current Monitoring Setup: Observe the absence of benchmark monitoring in the current system. Verify that there is no CI hook for regression testing related to performance.
2. Attempt to Enable Benchmark Monitoring: Explore the system configuration or relevant scripts to enable benchmark monitoring. Check for existing CI hooks related to performance.
3. Verify Implementation: Execute benchmark monitoring after attempting to enable it. Check if the CI hook triggers regression testing for performance-related changes.

Expected Behaviour:
1. Benchmark Monitoring Enabled: After the task is completed, benchmark monitoring should be active, capturing relevant performance metrics.
2. CI Hook for Regression Testing: A CI hook should be in place to trigger regression testing for performance-related changes in the codebase.

Suggested Solution:
- Benchmark Monitoring: Integrate a suitable benchmark monitoring tool or solution into the system configuration. Configure the monitoring tool to capture relevant performance metrics.
- CI Hook for Regression Testing: Implement a CI hook that triggers regression testing for performance-related changes. Integrate the CI hook into the existing CI/CD pipeline.

Code Implementation Example:
Example CI/CD Configuration (GitLab CI):

```yaml
stages:
  - test

benchmark:
  stage: test
  script:
    - ./run_benchmarks.sh
```

Recommendation:
Ensure the selected benchmark monitoring tool aligns with system requirements. Regularly review and update the benchmark metrics being monitored to reflect evolving performance expectations.

Reported by: Uzma Qureshi
Proof of Concept: A simple proof of concept (PoC) to enable benchmark monitoring. Note that this is a generic example, and you may need to customize it based on your specific environment and the benchmark monitoring tool you choose.
Assuming you are using a Unix-like system and want to integrate Apache Benchmark (ab) for benchmarking, here's a basic script:
run_benchmarks.sh:
```bash
#!/bin/bash

# Set variables
TARGET_URL="http://your-api-endpoint.com/"
BENCHMARK_RESULTS_FILE="benchmark_results.txt"

# Run Apache Benchmark (ab)
ab -n 100 -c 10 $TARGET_URL > $BENCHMARK_RESULTS_FILE

# Print benchmark results
cat $BENCHMARK_RESULTS_FILE
```
This script does the following:
- It sends 100 requests (-n 100) with a concurrency of 10 (-c 10) to the specified API endpoint ($TARGET_URL).
- The benchmark results are saved in a file named benchmark_results.txt.
- The script then prints the benchmark results to the console.

Remember to replace "http://your-api-endpoint.com/" with the actual URL you want to benchmark.
@uzmi1: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then
/claim #2265
The bounty is up for grabs! Everyone is welcome to /attempt #2265
@nermalcat69: Reminder that in 7 days the bounty will become up for grabs, so please submit a pull request before then
The bounty is up for grabs! Everyone is welcome to /attempt #2265
After digging a little bit, I found a few flaws in the build:
- The path to the benchmark files is outdated, which results in no JMH benchmarks running in ci.yml.
- The JDK version 8 in ci.yml results in an error during the JMH run.
- The UtilBenchmark runs in avgt mode, unlike the others, so using grep "thrpt" will throw an error (a possible tweak is sketched below).
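A possible tweak for that last point, purely as a sketch, is to match both result modes when filtering the JMH output (the output file name here is a placeholder):

```bash
# Keep both throughput (thrpt) and average-time (avgt) result lines instead of thrpt only.
grep -E "thrpt|avgt" jmh-output.txt
```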
/attempt 2265
Algora profile | Completed bounties | Tech | Active attempts | Options
---|---|---|---|---
@alankritdabral | 2 bounties from 2 projects | | | Cancel attempt
Currently, we're running each benchmark for both the current branch and the base branch in parallel, which doubles the time required. The approach I'm considering is to run the base benchmarks on each push to the main branch and save the results as an artifact. During a pull request run, we'll execute the benchmarks for the current branch, download the base artifacts, compare the current benchmarks with the base benchmarks using a shell script, and upload the results at the same time. If the regression exceeds a certain threshold, we will fail the CI.
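A rough sketch of the pull-request side of that split, assuming the baseline results from main are persisted with actions/cache (keys, file names, and the comparison script are illustrative only):

```yaml
      - name: Restore baseline benchmark results from main
        uses: actions/cache/restore@v4
        with:
          path: jmh-result-main.json
          key: jmh-baseline-${{ github.event.pull_request.base.sha }}
          restore-keys: |
            jmh-baseline-
      - name: Compare against baseline
        # Placeholder script: diff the two JMH JSON reports and exit non-zero
        # when the regression threshold is exceeded, failing the CI job.
        run: ./compare_benchmarks.sh jmh-result-main.json jmh-result-pr.json
```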
I will divide the task into two PRs.
- [x] Firstly, run benchmarks on push to the main branch only and save them as cache. #2750
- [ ] Secondly, run benchmarks on each pull request and compare its results with the base benchmarks and show the difference. #2751
@jdegoes I have created a pull request for the current issue: pull_request. Hope you find this PR useful. :smile:
Hey @jdegoes, #2751 would completely close this issue, as I have divided the solution into two PRs as stated in the above comment. Can you review it? It's in a working state.