kubevirtci
fix: use pre-generated image lists for pre-pulling images
What this PR does / why we need it:
Detection of missing pre-pulled images broke when provisioning was moved into gocli. This PR therefore switches from calling fetch-images.sh at runtime to a pre-generated pre-pull-images file.
Changes in this PR:
- generation: script update-pre-pull-images.sh generates pre-pull-images by calling fetch-images.sh, which is moved from the version folders into the main provision/k8s folder
- simplification: replace the pre-pull logic based on fetch-images.sh and extra-... with simply concatenating the pre-pull-images and extra-pre-pull-images text files
- safety-net: pre-pull-images needs to be updated whenever the manifests inside either the k8s version folder or the gocli change; add a call to update-pre-pull-images.sh in check-cluster-up.sh to check idempotency (no changes in repo)
- install: set some more flags to install cdi and ceph, remove the install-everything script
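The simplified pre-pull step above can be sketched as follows; the image names and the stubbed pull command are invented for illustration, the real provisioning code lives in gocli:

```shell
set -eu

# stand-in lists; the real files live under cluster-provision/k8s/<version>/
printf 'registry.k8s.io/pause:3.9\nquay.io/example/img:v1\n' > pre-pull-images
printf 'quay.io/example/extra:v2\n' > extra-pre-pull-images

# the simplification: just concatenate both lists and pull each image
# (the pull itself is stubbed out with echo here, e.g. crictl pull "$image")
cat pre-pull-images extra-pre-pull-images | while read -r image; do
  echo "pre-pulling $image"
done
```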
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
/test check-provision-k8s-1.32
@dhiller: The /test command needs one or more targets.
The following commands are available to trigger required jobs:
/test check-gocli
/test check-provision-centos-base
/test check-provision-k8s-1.30
/test check-provision-k8s-1.30-s390x
/test check-provision-k8s-1.31
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32-s390x
/test check-provision-manager
/test check-up-kind-1.30-vgpu
/test check-up-kind-ovn
/test check-up-kind-sriov
The following commands are available to trigger optional jobs:
/test check-provision-alpine-with-test-tooling
/test check-provision-k8s-1.31-s390x
/test check-up-kind-1.28
/test check-up-kind-1.31
Use /test all to run the following jobs that were automatically triggered:
check-gocli
check-provision-alpine-with-test-tooling
check-provision-k8s-1.30
check-provision-k8s-1.31
check-provision-k8s-1.31-s390x
check-provision-k8s-1.32
check-provision-manager
check-up-kind-1.30-vgpu
check-up-kind-sriov
In response to this:
/test check-provision-k8s-1.32
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32
/hold
waiting for https://github.com/kubevirt/project-infra/pull/3948
updating images through https://github.com/kubevirt/project-infra/pull/3949
/unhold
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32
@brianmcarey I think I have it working now :)
Hm, strange, on all the check-provision lanes the same test fails: [sig-network] [rfe_id:694][crit:medium][vendor:[email protected]][level:component]Networking VirtualMachineInstance with masquerade binding mechanism when performing migration [Conformance] should preserve connectivity - IPv6
Any ideas?
/test check-provision-1.30 /test check-provision-1.31 /test check-provision-1.32
@dhiller: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:
/test check-gocli
/test check-provision-centos-base
/test check-provision-k8s-1.30
/test check-provision-k8s-1.30-s390x
/test check-provision-k8s-1.31
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32-s390x
/test check-provision-manager
/test check-up-kind-1.30-vgpu
/test check-up-kind-ovn
/test check-up-kind-sriov
The following commands are available to trigger optional jobs:
/test check-provision-alpine-with-test-tooling
/test check-provision-k8s-1.31-s390x
/test check-up-kind-1.28
/test check-up-kind-1.31
Use /test all to run the following jobs that were automatically triggered:
check-gocli
check-provision-alpine-with-test-tooling
check-provision-k8s-1.30
check-provision-k8s-1.31
check-provision-k8s-1.31-s390x
check-provision-k8s-1.32
check-provision-manager
check-up-kind-1.30-vgpu
check-up-kind-sriov
In response to this:
/test check-provision-1.30 /test check-provision-1.31 /test check-provision-1.32
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test check-provision-k8s-1.30 /test check-provision-k8s-1.31 /test check-provision-k8s-1.32
/test check-provision-1.32
@dhiller: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:
/test check-gocli
/test check-provision-centos-base
/test check-provision-k8s-1.30
/test check-provision-k8s-1.30-s390x
/test check-provision-k8s-1.31
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32-s390x
/test check-provision-manager
/test check-up-kind-1.30-vgpu
/test check-up-kind-ovn
/test check-up-kind-sriov
The following commands are available to trigger optional jobs:
/test check-provision-alpine-with-test-tooling
/test check-provision-k8s-1.31-s390x
/test check-up-kind-1.28
/test check-up-kind-1.31
Use /test all to run the following jobs that were automatically triggered:
check-gocli
check-provision-alpine-with-test-tooling
check-provision-k8s-1.30
check-provision-k8s-1.31
check-provision-k8s-1.31-s390x
check-provision-k8s-1.32
check-provision-manager
check-up-kind-1.30-vgpu
check-up-kind-sriov
In response to this:
/test check-provision-1.32
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test check-provision-k8s-1.32
@oshoval @ormergi maybe you have some ideas about this:
All the check-provision lanes are constantly failing on a specific sig-network Conformance test named
[sig-network] [rfe_id:694][crit:medium][vendor:[email protected]][level:component]Networking VirtualMachineInstance with masquerade binding mechanism when performing migration [Conformance] should preserve connectivity - IPv6 (source)
On the main sig-network lanes that test seemed to have only very mild flakiness; it could be a test-order dependency, since that test was not the only one failing.
I have no idea what might be missing here - the only thing I found differing from the k/kubevirt sig-network lane config was something around KUBEVIRT_DEPLOY_NET_BINDING_CNI, but I doubt that this is the reason?
Another note: I found that the IPv6 test largely duplicates the IPv4 version of the test, except for the section around DHCP in the IPv4 test.
I'd appreciate any idea or suggestion ...
Hi, it seems the migration fails. The only ideas I have so far are:
- remove it from the kci conformance suite and just rely on the kubevirt one
- add a sleep and then connect to the job via the admin access that you and Brian have, to dig deeper into the reason
@sourcery-ai review
Reviewer's Guide by Sourcery
This pull request refactors the image pre-pulling mechanism by replacing runtime calls to fetch-images.sh with a pre-generated pre-pull-images file, simplifying the process and ensuring consistency across different Kubernetes versions.
Sequence diagram for the updated image pre-pulling process
sequenceDiagram
participant User
participant updateScript as update-pre-pull-images.sh
participant fetchScript as fetch-images.sh
participant prePullFile as pre-pull-images
participant extraPrePullFile as extra-pre-pull-images
User->>updateScript: Run update-pre-pull-images.sh
updateScript->>fetchScript: Call fetch-images.sh for version folder
updateScript->>fetchScript: Call fetch-images.sh for gocli opts
fetchScript-->>updateScript: Return list of images
updateScript->>prePullFile: Write unique images to pre-pull-images
updateScript->>extraPrePullFile: Remove duplicates from extra-pre-pull-images
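The de-duplication step at the end of the diagram could be sketched like this (the file contents are invented for illustration; the actual script may implement it differently):

```shell
set -eu

# sample lists: img-b appears in both files
printf 'img-a\nimg-b\n' > pre-pull-images
printf 'img-b\nimg-c\n' > extra-pre-pull-images

# drop every line of extra-pre-pull-images that already appears in
# pre-pull-images (-v invert, -x whole line, -F fixed string, -f pattern file)
grep -vxFf pre-pull-images extra-pre-pull-images > extra-pre-pull-images.tmp
mv extra-pre-pull-images.tmp extra-pre-pull-images

cat extra-pre-pull-images   # only img-c remains
```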
Class diagram for the updated image pre-pulling scripts
classDiagram
class update-pre-pull-images.sh {
+fetchImages(provision_dir)
+removeDuplicates()
}
class fetch-images.sh {
+checkArgs()
+main()
}
class pre-pull-images
class extra-pre-pull-images
update-pre-pull-images.sh --> fetch-images.sh : uses
update-pre-pull-images.sh --> pre-pull-images : generates
update-pre-pull-images.sh --> extra-pre-pull-images : modifies
File-Level Changes
| Change | Details | Files |
|---|---|---|
| Replace runtime image fetching with pre-generated image lists. | | cluster-provision/k8s/check-cluster-up.sh, cluster-provision/k8s/1.30/k8s_provision.sh, cluster-provision/k8s/1.31/k8s_provision.sh, cluster-provision/k8s/1.32/k8s_provision.sh, cluster-provision/k8s/1.33/k8s_provision.sh, cluster-provision/k8s/fetch-images.sh, cluster-provision/gocli/cmd/provision.go, cluster-provision/k8s/check-pod-images.sh, cluster-provision/k8s/update-pre-pull-images.sh, cluster-provision/k8s/1.30/pre-pull-images, cluster-provision/k8s/1.31/pre-pull-images, cluster-provision/k8s/1.32/pre-pull-images, cluster-provision/k8s/1.33/pre-pull-images, cluster-provision/k8s/fetch-images-exclude-patterns |
| Remove redundant scripts and update image pre-pulling logic. | | cluster-provision/k8s/1.31/fetch-images.sh, cluster-provision/k8s/1.32/fetch-images.sh, cluster-provision/k8s/1.33/fetch-images.sh, cluster-provision/k8s/deploy-manifests.sh |
| Enhance image fetching and pre-pulling safety. | | cluster-provision/k8s/check-cluster-up.sh, cluster-provision/k8s/update-pre-pull-images.sh |
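The safety-net in check-cluster-up.sh (failing when pre-pull-images is out of date) could look roughly like this; the regeneration step is stubbed out here, while the real script calls fetch-images.sh against the manifests:

```shell
set -eu

# committed list (stand-in content)
printf 'img-a\nimg-b\n' > pre-pull-images

# stub for the real regeneration done by update-pre-pull-images.sh
regenerate_pre_pull_images() { printf 'img-a\nimg-b\n'; }

# idempotency check: regenerating must not change the committed file
regenerate_pre_pull_images > pre-pull-images.generated
if ! cmp -s pre-pull-images pre-pull-images.generated; then
  echo "pre-pull-images is out of date; run update-pre-pull-images.sh" >&2
  exit 1
fi
echo "pre-pull-images is up to date"
```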
Tips and commands
Interacting with Sourcery
- Trigger a new review: Comment @sourcery-ai review on the pull request.
- Continue discussions: Reply directly to Sourcery's review comments.
- Generate a GitHub issue from a review comment: Ask Sourcery to create an issue from a review comment by replying to it. You can also reply to a review comment with @sourcery-ai issue to create an issue from it.
- Generate a pull request title: Write @sourcery-ai anywhere in the pull request title to generate a title at any time. You can also comment @sourcery-ai title on the pull request to (re-)generate the title at any time.
- Generate a pull request summary: Write @sourcery-ai summary anywhere in the pull request body to generate a PR summary at any time exactly where you want it. You can also comment @sourcery-ai summary on the pull request to (re-)generate the summary at any time.
- Generate reviewer's guide: Comment @sourcery-ai guide on the pull request to (re-)generate the reviewer's guide at any time.
- Resolve all Sourcery comments: Comment @sourcery-ai resolve on the pull request to resolve all Sourcery comments. Useful if you've already addressed all the comments and don't want to see them anymore.
- Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull request to dismiss all existing Sourcery reviews. Especially useful if you want to start fresh with a new review - don't forget to comment @sourcery-ai review to trigger a new review!
- Generate a plan of action for an issue: Comment @sourcery-ai plan on an issue to generate a plan of action for it.
Customizing Your Experience
Access your dashboard to:
- Enable or disable review features such as the Sourcery-generated pull request summary, the reviewer's guide, and others.
- Change the review language.
- Add, remove or edit custom review instructions.
- Adjust other review settings.
Getting Help
- Contact our support team for questions or feedback.
- Visit our documentation for detailed guides and information.
- Keep in touch with the Sourcery team by following us on X/Twitter, LinkedIn or GitHub.
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32
/test check-provision-k8s-1.32 check-provision-k8s-1.33 check-provision-k8s-1.31
/test check-provision-k8s-1.32 check-provision-k8s-1.33 check-provision-k8s-1.31
/test check-provision-k8s-1.32 check-provision-k8s-1.33 check-provision-k8s-1.31
/test check-provision-k8s-1.32 check-provision-k8s-1.33 check-provision-k8s-1.31