flake-stats: add number of merged PRs per day to box
Is your feature request related to a problem? Please describe: The flake-stats report is currently our main tool for looking at flaky tests during the SIG-CI meetings. It has proven extremely useful for getting a quick overview of where things go wrong, thanks to the overviews that aggregate failures per day and per lane.
Sometimes, however, a day appears to have a lot of failures solely because a lot of PRs were merged on that day. The reason is that the sum of failures per day is taken from the flakefinder report, which attributes all failures of a merged PR to the day it was merged, not to the day each failure actually occurred.
> [!NOTE]
> Example: consider a PR that was created on Dec 21st and merged on Jan 1st. During that period its test lanes ran several times, and some of those runs had unit test failures. All of those failures are attributed to the day the PR was merged, so that day looks as if a lot of failures occurred on it, which is not the case.
Describe the solution you'd like: By adding the number of merged PRs to each day's numbers we can better spot anomalies, e.g. a high number of test failures despite a low number of merged PRs might indicate that a flaky or unstable test was introduced.
Thus we want to show the number of PRs merged on that day, added to the box that shows the failures for that day. Sketch:
| Mon, 26 Aug 2024 | |
|---|---|
| Failures | 42 |
| PRs merged | 10 |
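A minimal sketch of what the extra row could look like in the Go template, assuming the per-day box is rendered as a table; the markup and the field name `.PRNumbers` are placeholders for illustration, not necessarily the template's actual data model:

```gohtml
{{/* Hypothetical addition to the per-day box in flake-stats.gohtml.
     ".PRNumbers" is assumed to be the slice backing the "prNumbers" JSON element. */}}
<tr>
    <td>PRs merged</td>
    <td>{{ len .PRNumbers }}</td>
</tr>
```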
Additional context:
- flake-stats code: https://github.com/kubevirt/project-infra/tree/main/robots/cmd/flake-stats
- flake-stats Go Template location: https://github.com/kubevirt/project-infra/blob/a609d2f66961a57e6a447e29151f44bf6cf14464/robots/cmd/flake-stats/flake-stats.gohtml#L32
- The number of merged PRs (and the actual PR numbers) is available from the JSON element "prNumbers" (see the sketch after this list)
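For orientation, here is a minimal sketch of how the per-day count could be derived from that element, assuming each per-day entry carries a `prNumbers` array; the struct and field names below are illustrative only and may differ from the actual flake-stats types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// reportDay is a hypothetical slice of the per-day data read from the
// flakefinder report; only the fields relevant to this sketch are shown.
type reportDay struct {
	Date      string `json:"date"`
	Failures  int    `json:"failures"`
	PRNumbers []int  `json:"prNumbers"` // PRs merged on that day
}

func main() {
	raw := []byte(`[{"date": "2024-08-26", "failures": 42, "prNumbers": [1101, 1102, 1103]}]`)

	var days []reportDay
	if err := json.Unmarshal(raw, &days); err != nil {
		panic(err)
	}

	for _, d := range days {
		// len(d.PRNumbers) is the "PRs merged" number to add to the day's box.
		fmt.Printf("%s: failures=%d, PRs merged=%d\n", d.Date, d.Failures, len(d.PRNumbers))
	}
}
```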
/good-first-issue
@dhiller: This request has been marked as suitable for new contributors.
Guidelines
- No Barrier to Entry
- Clear Task
- Solution Explained
- Provides Context
- Identifies Relevant Code
- Gives Examples
- Ready to Test
- Goldilocks priority
- Up-To-Date
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@dhiller I'll take a look. Thank you!! /assign
@dhiller Do we need the extra row "PRs merged | 10" under the per-day heading only?
> @dhiller Do we need the extra row "PRs merged | 10" under the per-day heading only?
Yes, I think that's good enough for now - the goal is to make readers understand that the sheer amount of failures might correlate with the number of PRs that were merged on that day.
Also there's no such number to show for the periodic jobs, which might confuse the reader.
^^ @anishbista60
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle rotten
Still an enhancement we would like to see
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen /remove-lifecycle rotten /lifecycle frozen
@dhiller: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten /lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/unassign
/unassign anishbista60
No work activity yet, let's free this up for someone else to tackle it