enable multiarch support in prowjobs
I introduced a new label called `preset-enable-multiarch-support: "true"` that we can set on prowjobs to let them tolerate the taint set on the arm64 nodes and occasionally run on them.
The intention is to attach the label to jobs that don't care about the architecture, such as linting, verify, and e2e runner jobs.
In the near future, the EKS cluster will have Graviton (arm64) nodes enabled too, so we can apply the same strategy there.
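For reference, here is a minimal sketch of the toleration such jobs would need, assuming the arm64 nodes are tainted with `kubernetes.io/arch=arm64:NoSchedule` (the actual taint key, value, and effect on the build cluster nodes may differ):

```yaml
# Sketch of the pod-spec toleration the preset is expected to provide.
# The taint key, value, and effect below are assumptions; check the
# actual taints on the arm64 nodes in the build cluster.
tolerations:
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule
```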
Test failure is being fixed at https://github.com/kubernetes/test-infra/pull/35530/files#diff-4c91473011b71c39bad67a8fbeef555b3a79182241ddfc6b0257b11fe3d04efa
/retest
> I introduced a new label called `preset-enable-multiarch-support: "true"` that we can set on prowjobs to let them tolerate the taint set on the arm64 nodes and occasionally run on them.
> The intention is to attach the label to jobs that don't care about the architecture, such as linting, verify, and e2e runner jobs.
This could actually introduce some really surprising failures: linting jobs typically lint for the host platform, and e2e runner jobs build for the host platform on the assumption that it is also the target platform.
For example, when we run golangci-lint, we run it for the host architecture; it's impractical to run it for every architecture, and that means platform-constrained code (e.g. files behind `//go:build` constraints) gives us different code and behavior depending on which architecture the job landed on.
Randomizing the hosts can make for difficult-to-debug failures, and we don't surface this very well in CI; see previously the kernel issues and the ipv6 jobs on the mid-upgrade nodepool. Most of our contributors will not see what's going on there.
It also means the results are less comparable over time; I don't think we should do that with most of our CI.
Something like launching the cloudbuilds, sure.
But for basically anything that builds ... (including linting, which typically parses for the host platform) fuzzing the architecture between runs is ... a bit chaotic.
cc @kubernetes/sig-testing-leads
> It also means the results are less comparable over time; I don't think we should do that with most of our CI.
It will be an opt-in feature, and jobs that compile/build artefacts generally aren't a good fit.
We have many jobs that launch e2e tests on clouds that would ideally benefit from this change, specifically the periodic jobs that don't build Kubernetes and instead fetch prebuilt binaries from the CI buckets.
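To make the opt-in concrete, here is a minimal sketch of a periodic job carrying the label; the job name, image, and command are hypothetical, and the preset matched by the label is assumed to wire up the corresponding toleration:

```yaml
periodics:
- name: ci-example-e2e-arch-agnostic  # hypothetical job name
  interval: 6h
  decorate: true
  labels:
    # Opting in: the preset keyed on this label is assumed to add the
    # toleration for the arm64 node taint.
    preset-enable-multiarch-support: "true"
  spec:
    containers:
    - image: example.com/e2e-runner:latest  # hypothetical multi-arch image
      command:
      - ./run-e2e.sh  # hypothetical entrypoint that fetches CI-built binaries
```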
Friendly bump. For now, I want to enable the label and opt in the lint jobs in k/test-infra.