Autobump prow images
Instruct the generic-autobumper to handle the Prow images in config/prow/config.yaml.
Most likely, this is the list of images that will be periodically bumped:
$ grep -nF 'docker.pkg.dev' config/prow/config.yaml
19: clonerefs: "us-docker.pkg.dev/k8s-infra-prow/images/clonerefs:v20240802-66b115076"
20: initupload: "us-docker.pkg.dev/k8s-infra-prow/images/initupload:v20240802-66b115076"
21: entrypoint: "us-docker.pkg.dev/k8s-infra-prow/images/entrypoint:v20240802-66b115076"
22: sidecar: "us-docker.pkg.dev/k8s-infra-prow/images/sidecar:v20240802-66b115076"
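To make the intent concrete, here is a minimal sketch of the kind of entry this change implies, assuming the generic-autobumper config format (`includedConfigPaths`, `prefixes`, etc.); the field names and values below are my reading of that format, not the repo's actual autobump config:

```yaml
# Sketch only: field names follow my understanding of the generic-autobumper
# config and may not match the autobump config actually used in this repo.
targetVersion: latest                # bump to the newest published tag (assumption)
includedConfigPaths:
  - config/prow/config.yaml          # file containing the pod-utility image refs above
prefixes:
  - name: Prow
    prefix: us-docker.pkg.dev/k8s-infra-prow/images/
    consistentImages: true           # keep clonerefs/initupload/entrypoint/sidecar on one tag
```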
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: danilo-gemoli. Once this PR has been reviewed and has the lgtm label, please assign cblecker for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
(or can we stop explicitly setting them and get a default from there?)
I can't find anything referencing those images in kubernetes/k8s.io:
$ grep -RP 'us\-docker\.pkg\.dev.*(clonerefs|initupload|entrypoint|sidecar)' src/github.com/kubernetes/k8s.io/ | wc -l
0
by the way, don't we already have an autobumper configured for it? Job here and config here. Maybe I'm missing something.
> I can't find anything referencing those images in kubernetes/k8s.io:
Not these images, but the overall prow "release": I'm not sure if prow supports skew between these images and the controllers that create modified pods using them. Those controllers are in the same staging registry and are bumped together at once in k8s.io.
Previously the prow deployment was also in this repo, so all of the images were bumped together.
cc @petr-muller @upodroid 👀
(Also, if prow had the ability to fall back to using the current release, then I think it would make more sense to stop explicitly configuring this across repos, but I suspect it doesn't, and that might be problematic until we have a real release host and not just staging anyhow)
We should bump them together; the assumption was that they were in the same repo, but we split them up.
> We should bump them together; the assumption was that they were in the same repo, but we split them up.
Maybe we can write a frequent, auto-merged, autobump job that just copies the version from k/k8s.io to these particular images?
A bit Rube Goldberg, but ...
Though, even then, there might be some breaking change in prow that requires bumping them together, and then we're in trouble ...
Maybe we can pull that out to a separate config ...
(we might also want to do something about the lint presubmit taking twice as long as the unit tests in the case that we're autobumping here to match k/k8s.io https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/test-infra/34245/pull-test-infra-verify-lint/1885347342421331968)
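To sketch the "copy the version from k/k8s.io" idea above: the generic-autobumper's upstream mode is probably the closest fit, assuming it can read the target tag from a file in another repo. Everything below is illustrative; in particular the k8s.io file path is a hypothetical placeholder, and auto-merging would still need tide/label setup on top of it:

```yaml
# Sketch only: the k8s.io file path is a hypothetical placeholder, and the
# upstream-mode field names are my reading of the generic-autobumper config.
targetVersion: upstream
upstreamURLBase: https://raw.githubusercontent.com/kubernetes/k8s.io/main
includedConfigPaths:
  - config/prow/config.yaml
prefixes:
  - name: Prow
    prefix: us-docker.pkg.dev/k8s-infra-prow/images/
    refConfigFile: path/to/prow/deployment.yaml   # hypothetical: file in k8s.io that pins the prow release
    consistentImages: true
```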
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
> You can:
> - Reopen this PR with /reopen
> - Mark this PR as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Fixed in https://github.com/kubernetes/test-infra/pull/34993