chore(config/jobs): workaround #1158
Proposed way to work around #1158, even if this is not a real fix: trigger all build-drivers jobs on any new config. See https://github.com/falcosecurity/test-infra/issues/1158#issuecomment-1710018485
PROs:
- super easy workaround
CONs:
- we will spawn useless pods
- we will spawn more nodes than needed
However, please note that:
- nowadays, since the kernel-crawler automation was implemented, we rarely have PRs that add a single config (it has not happened in the past year)
- normally, we trigger driver builds either when the kernel-crawler automation fires or when we add a new driver version; those events already trigger most of our jobs (~90-100%), therefore spawning all pods in this case would make no real difference
- the only case where we would scale badly is when we manually run the kernel-crawler automation against a single distro; in that case, instead of triggering only that distro's build-drivers jobs, we would trigger all of them (even if `skip_existing` prevents us from rebuilding everything, of course)
In this PR, I trigger x86_64 and aarch64 jobs indifferently, since when, e.g., we add a new driver version, we basically only trigger the aarch64 jobs before the GitHub API cuts our list of changed files. In that case, to actually trigger the x86_64 jobs, we need this fix.
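To illustrate the idea, here is a minimal sketch of what such a Prow presubmit could look like. The job name, container image, and config paths below are hypothetical placeholders, not the actual entries in this repository; the point is only the broadened `run_if_changed` regex, which makes the job fire on any driver config change instead of only on its own distro/arch subtree.

```yaml
# Hypothetical Prow presubmit sketch -- names and paths are illustrative only.
presubmits:
  falcosecurity/test-infra:
    - name: build-drivers-example-distro-x86_64-presubmit  # hypothetical job name
      # Before (per-distro trigger, hypothetical path pattern):
      #   run_if_changed: '^driverkit/config/.*/x86_64/example-distro_.*'
      # After: fire on ANY new/changed driver config, regardless of distro or arch.
      run_if_changed: '^driverkit/config/'
      decorate: true
      spec:
        containers:
          - image: example.registry/driverkit-builder:latest  # hypothetical image
            command: ["/bin/build-drivers.sh"]                # hypothetical entrypoint
```

With a trigger this broad, every build-drivers job spawns a pod on any config PR (the CON above), but `skip_existing` keeps the jobs from actually rebuilding drivers that are already published.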
/hold for discussion
/cc @maxgio92 @zuc
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: FedeDP
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~config/jobs/build-drivers/OWNERS~~ [FedeDP]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@FedeDP see https://github.com/falcosecurity/test-infra/issues/1158#issuecomment-1719314556
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
/remove-lifecycle stale
@FedeDP: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| peribolos-syncer-pre-submit-test | 3f0d56e6e4d07d92eaf8df7e23f0dd5d6ab68535 | link | true | /test peribolos-syncer-pre-submit-test |
Full PR test history. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
/remove-lifecycle rotten