deploy: Change imagestreams importMode to work with multi-arch compute clusters
On multi-arch compute clusters, pods can land on worker nodes of any architecture. Ensuring that the imagestreams import the manifest list helps the deployment succeed on whichever node it lands on.
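For illustration, here is a minimal sketch of the kind of tag definition this change produces. The imagestream name and image reference are hypothetical, not taken from this PR; the `importPolicy.importMode` field (available since OpenShift 4.13) is the setting being flipped:

```bash
# Hypothetical example: "sample-app" and the quay.io reference are
# illustrative. PreserveOriginal keeps the whole manifest list on import;
# the default (Legacy) imports a single, arch-filtered manifest.
oc apply -f - <<'EOF'
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: sample-app
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/sample-app:latest
    importPolicy:
      importMode: PreserveOriginal
EOF
```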
/retest
@soltysh pretty simple change - if you could approve, would be great.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Prashanth684. Once this PR has been reviewed and has the lgtm label, please assign bparees for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@Prashanth684: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-azure-ovn-etcd-scaling | 00f381a69d5c68dec4dfc19a67ccd4349e491da8 | link | false | /test e2e-azure-ovn-etcd-scaling |
| ci/prow/e2e-gcp-ovn-etcd-scaling | 00f381a69d5c68dec4dfc19a67ccd4349e491da8 | link | false | /test e2e-gcp-ovn-etcd-scaling |
| ci/prow/e2e-vsphere-ovn-etcd-scaling | 00f381a69d5c68dec4dfc19a67ccd4349e491da8 | link | false | /test e2e-vsphere-ovn-etcd-scaling |
| ci/prow/e2e-aws-ovn-single-node-upgrade | 6f8ced8d8b7b0f1b11cb621fa4808227d521fe96 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/e2e-aws-ovn-single-node-serial | 6f8ced8d8b7b0f1b11cb621fa4808227d521fe96 | link | false | /test e2e-aws-ovn-single-node-serial |
| ci/prow/e2e-aws-ovn-single-node | 6f8ced8d8b7b0f1b11cb621fa4808227d521fe96 | link | false | /test e2e-aws-ovn-single-node |
| ci/prow/e2e-aws-csi | 6f8ced8d8b7b0f1b11cb621fa4808227d521fe96 | link | false | /test e2e-aws-csi |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/hold
The changes here suggest that something broke compatibility-wise with our CLI and possibly our server. I'd like to understand why these changes are necessary before merging them. Also, very glad we had tests that demonstrate this compatibility breakage.
@deads2k The intent of these changes was to handle the case of testing across mismatched architectures.
Our intent behind the original https://github.com/openshift/release/pull/40722 was to ensure that e2e tests running from an x86 build cluster would still be able to pass tests that were run on a target arm64 cluster.
With OpenShift 4.13, the image streams now support manifest lists. However, the default behavior when importing an image is to only import the image that matches the client architecture. This causes a problem when your test-pod architecture doesn't match your target cluster architecture because the image you've imported is incompatible.
The solution is to specify the import of the original manifest (which is a manifest list). This allows the cluster to resolve the correct image upon inspection.
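As a hedged sketch of what that looks like from the CLI (assuming an oc client new enough to carry the --import-mode flag for oc import-image; the image name is hypothetical):

```bash
# Ask the server to import the original manifest list rather than the
# single manifest matching its own architecture.
oc import-image sample-app:latest \
  --from=quay.io/example/sample-app:latest \
  --import-mode=PreserveOriginal \
  --confirm
```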
@Prashanth684 do you know more about the actual changes to the oc CLI for context?
> With OpenShift 4.13, the image streams now support manifest lists. However, the default behavior when importing an image is to only import the image that matches the client architecture. This causes a problem when your test-pod architecture doesn't match your target cluster architecture because the image you've imported is incompatible.
The above reason was why the changes were introduced - when deploying a multi-arch cluster with the multi payload, we need to ensure that oc commands in general import the manifest list rather than a single manifest, so the images work on any architecture.
@deads2k The functionality of the flag is itself backward compatible.
If the target image stream is a single arch manifest, it does nothing. If the target image stream is a manifest list, it ensures the list is pulled.
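One way to check which of those two cases applied, sketched under the assumption that the imported Image object exposes its child manifests in .image.dockerImageManifests (present in 4.13+; empty for a plain single-arch manifest):

```bash
# Prints the architectures of the child manifests if a manifest list
# was imported; prints nothing for a single-arch manifest.
oc get istag sample-app:latest \
  -o jsonpath='{.image.dockerImageManifests[*].architecture}'
```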
The reason the arm sdn tests began to fail was that we changed the payload (around last week) to use a manifest list instead of the single-arch manifest. It used to work because even though the test pod is in an x86 cluster, the manifest it was importing was a single-arch manifest for arm, which is compatible with the target cluster.
> However, the default behavior when importing an image is to only import the image that matches the client architecture.
Does this update indicate that we actually want the default to be based on the server architecture, not the client architecture? We want the pods using this image to succeed and those pods aren't running on the client.
> However, the default behavior when importing an image is to only import the image that matches the client architecture.
> Does this update indicate that we actually want the default to be based on the server architecture, not the client architecture? We want the pods using this image to succeed and those pods aren't running on the client.
That statement is incorrect - the default behavior when importing an image is to import the manifest of the architecture that matches the control plane's architecture. This is because the apiserver runs on the control plane and uses GOARCH to filter the manifest. This update makes it so that when a manifest list is imported, it imports the manifest list and does not import only a single-arch manifest.
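To make that filtering concrete: a manifest list resolves to a different image digest per platform, which you can inspect from outside the cluster with oc image info (image name hypothetical):

```bash
# Each platform entry in the manifest list points at its own digest;
# a Legacy-mode import keeps exactly one of these.
oc image info quay.io/example/sample-app:latest --filter-by-os=linux/amd64
oc image info quay.io/example/sample-app:latest --filter-by-os=linux/arm64
```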
> Does this update indicate that we actually want the default to be based on the server architecture, not the client architecture? We want the pods using this image to succeed and those pods aren't running on the client.
Say you have a cluster with multiple compute nodes - which architecture is your server architecture? What's reported is the architecture of the control plane. So if your compute nodes don't match, then the server arch doesn't work either.
Also based on Prashanth's note, it appears I was not quite on target with how the default import arch works.
-- Edit/Side Note: The reason I thought the client arch was relevant was because of a recent fix we landed in CI tools. Now I realize the reason the arch didn't match was that the import happens against the build farm cluster (build01 - x86_64), not because the pod/client is x86_64. Since the cluster under test is all arm64, it would have worked correctly if run on the arm64 build farm (arm01), since it would have gotten the server arch there.
> This update makes it so that when a manifest list is imported, it imports the manifest list and does not import only a single-arch manifest.
Why does this happen? Do our clusters have masters in architecture/A and workers in arch/B? If so, I'd like to see the test gated so that it only adds this argument to the heterogeneous case.
> Why does this happen? Do our clusters have masters in architecture/A and workers in arch/B?
We currently have 2 jobs that run with multi-arch compute nodes (i.e. heterogeneous clusters). They currently run with 3 amd64 control-plane nodes, 3 amd64 compute nodes, and 2 arm64 compute nodes.
So this test would pass about 3/5 times without this flag.
> If so, I'd like to see the test gated so that it only adds this argument to the heterogeneous case.
This flag doesn't affect imports unless the image stream is using manifest lists. What concern would we be addressing by not including it?
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
@deads2k reviving this thread - how do we want to proceed here? The impact of changing importMode is minimal: images will only be imported as manifest lists when the payload used is a multi payload, with the exception of the redis and postgres images, which will always be imported as manifest lists (since they are not associated with the payload).
Would it be acceptable for you to allow this?
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.