operator-controller
:sparkles: feat: implement initial benchmark tests
Description
This PR implements benchmarking tests that will be used in CI to evaluate changes in performance and compute resource utilization as new features are implemented.
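For context, a minimal sketch of the kind of Go benchmark this is based on, using the standard `testing` package (the package, function, and workload here are placeholders, not the ones in this PR):

```go
package bench_test

import (
	"strings"
	"testing"
)

// workload stands in for the code path under test; the real benchmarks
// target operator-controller internals.
func workload() string {
	return strings.Repeat("x", 1024)
}

// BenchmarkWorkload runs the workload b.N times; `go test -bench=.`
// reports ns/op (plus B/op and allocs/op because of ReportAllocs).
func BenchmarkWorkload(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = workload()
	}
}
```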
Reviewer Checklist
- [ ] API Go Documentation
- [x] Tests: Unit Tests (and E2E Tests, if appropriate)
- [x] Comprehensive Commit Messages
- [ ] Links to related GitHub Issue(s)
Relates to #920
Deploy Preview for olmv1 ready!
| Name | Link |
|---|---|
| Latest commit | 1075eb334f8efe39ccd0acd826a06d534e4b7f36 |
| Latest deploy log | https://app.netlify.com/sites/olmv1/deploys/66b52ff7e13e21000831fa97 |
| Deploy Preview | https://deploy-preview-1071--olmv1.netlify.app |
Codecov Report
Attention: Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.
Project coverage is 75.23%. Comparing base (6864b0c) to head (1075eb3). Report is 234 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| internal/rukpak/source/image_registry.go | 0.00% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #1071      +/-   ##
==========================================
- Coverage   75.28%   75.23%    -0.06%
==========================================
  Files          35       35
  Lines        1914     1914
==========================================
- Hits         1441     1440        -1
- Misses        330      331        +1
  Partials      143      143
```
| Flag | Coverage Δ | |
|---|---|---|
| e2e | 57.41% <0.00%> (-0.06%) | :arrow_down: |
| unit | 50.78% <0.00%> (ø) | |
In my observation, these seem to be the largest contributors to the duration of a `Reconcile`.
Totally agree. My intent was to go through the code base and identify where we have more CPU- / memory-intensive tasks. However, to start working on the GitHub Actions side, I wanted to have something in place first.
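One way to isolate those contributors is sub-benchmarks, one per suspected hot spot, so the output shows which phase dominates. A sketch (the phase names and bodies are placeholders, not operator-controller code):

```go
package bench_test

import (
	"crypto/sha256"
	"testing"
)

// BenchmarkReconcilePhases splits a Reconcile-like flow into named
// sub-benchmarks; `go test -bench=.` then reports each phase separately.
func BenchmarkReconcilePhases(b *testing.B) {
	data := make([]byte, 1<<16) // shared setup, outside the timed loops

	b.Run("hash", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = sha256.Sum256(data)
		}
	})
	b.Run("copy", func(b *testing.B) {
		buf := make([]byte, len(data))
		for i := 0; i < b.N; i++ {
			copy(buf, data)
		}
	})
}
```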
While reviewing, consider:
- What should the threshold be for an increase in computational resource utilization?
- Should we fail when that threshold is exceeded?
- Should we add a comment to the PR with the benchmark results?
- Informational logs emitted by the function being benchmarked will break the benchmark output format. As a workaround, should we remove those logs altogether, or suppress them while benchmarks are running? (A sketch of the latter follows this list.)
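On the log-suppression option, a minimal sketch using the standard library logger (the project's actual logging setup may differ; `noisyOperation` is a placeholder):

```go
package bench_test

import (
	"io"
	"log"
	"testing"
)

// noisyOperation stands in for a function that emits informational logs.
func noisyOperation() {
	log.Println("reconciling...") // would interleave with benchmark output
}

func BenchmarkQuiet(b *testing.B) {
	// Redirect the standard logger to io.Discard for the duration of the
	// benchmark so log lines cannot corrupt the `go test -bench` output
	// that downstream tooling parses.
	old := log.Writer()
	log.SetOutput(io.Discard)
	b.Cleanup(func() { log.SetOutput(old) })

	for i := 0; i < b.N; i++ {
		noisyOperation()
	}
}
```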
During a meeting on the benchmarking of operator-controller functions held on 08/13/2024, the following action items were identified:
- [ ] Commit a golden benchmark to the repository, and compare the benchmarks of new PRs against that golden / reference benchmark
- [ ] Set the threshold as low as possible, preferably zero, and alert on any measurable change
- [ ] Switch the analysis of the benchmark data from pandas (which is written in Python) to Go tools / modules
- [ ] Fail / alert when a benchmark is removed relative to the golden benchmark, or when a PR adds a benchmark that does not exist in the golden benchmark (see the sketch after this list)
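Putting the last three items together, a hypothetical Go-only sketch of the comparison step (the `golden.txt` / `current.txt` file names and the zero-tolerance policy are assumptions; `benchstat` from golang.org/x/perf is the more established tool for this):

```go
// Compare `go test -bench` output against a committed golden file,
// failing on any slowdown and on any added or removed benchmark.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parse extracts "BenchmarkName -> ns/op" from raw benchmark output.
func parse(path string) (map[string]float64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Result lines look like: BenchmarkFoo-8   1000   1234 ns/op
		fields := strings.Fields(sc.Text())
		if len(fields) >= 4 && strings.HasPrefix(fields[0], "Benchmark") && fields[3] == "ns/op" {
			if v, err := strconv.ParseFloat(fields[2], 64); err == nil {
				out[fields[0]] = v
			}
		}
	}
	return out, sc.Err()
}

func main() {
	golden, err := parse("golden.txt") // hypothetical committed reference
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	current, err := parse("current.txt") // output from this PR's run
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fail := false
	for name, g := range golden {
		c, ok := current[name]
		if !ok {
			fmt.Printf("FAIL: %s was removed\n", name)
			fail = true
			continue
		}
		if delta := (c - g) / g; delta > 0 { // zero-tolerance threshold
			fmt.Printf("FAIL: %s slowed by %.1f%%\n", name, delta*100)
			fail = true
		}
	}
	for name := range current {
		if _, ok := golden[name]; !ok {
			fmt.Printf("FAIL: %s is not in the golden benchmark\n", name)
			fail = true
		}
	}
	if fail {
		os.Exit(1)
	}
}
```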
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Will be revisited at a later time.