cloud-provider-aws
Missing image referenced in chart values & manifests for v1.21.2
What happened:
Attempted deployment using the manifests in the v1.21.2 branch. The cloud-controller-manager image they reference does not exist, so the pod has ImagePullBackOff.
What you expected to happen:
The app should run using the appropriate image
How to reproduce it (as minimally and precisely as possible):
docker pull gcr.io/k8s-staging-provider-aws/cloud-controller-manager:v1.21.0-alpha.0
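One way to confirm which tags are actually published, without pulling the full images, is docker manifest inspect, which exits non-zero when a tag is missing (a quick sketch using the tags mentioned in this issue):

# The tag referenced in the reproduction step above -- expected to fail:
docker manifest inspect gcr.io/k8s-staging-provider-aws/cloud-controller-manager:v1.21.0-alpha.0
# A published release tag listed below -- expected to succeed:
docker manifest inspect registry.k8s.io/provider-aws/cloud-controller-manager:v1.21.2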
Anything else we need to know?:
These images do exist:
- registry.k8s.io/provider-aws/cloud-controller-manager:v1.21.0-alpha.0
- registry.k8s.io/provider-aws/cloud-controller-manager:v1.21.1
- registry.k8s.io/provider-aws/cloud-controller-manager:v1.21.2
- gcr.io/k8s-staging-provider-aws/cloud-controller-manager:v1.21.1
- gcr.io/k8s-staging-provider-aws/cloud-controller-manager:v1.21.2
Should the appVersion in Chart.yaml and the image tags in values.yaml & the manifests use the latest version, v1.21.2?
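Until the chart defaults are updated, one possible workaround is to override the image at install time. This is only a sketch: the value keys image.repository and image.tag are assumptions and may not match the chart's actual values.yaml layout.

helm upgrade --install aws-cloud-controller-manager ./charts/aws-cloud-controller-manager \
  --namespace kube-system \
  --set image.repository=registry.k8s.io/provider-aws/cloud-controller-manager \
  --set image.tag=v1.21.2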
Environment:
- Kubernetes version (use kubectl version): v1.21.11
/kind bug
@DavidRayner: This issue is currently awaiting triage.
If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
v1.21.2 image should be available now: https://console.cloud.google.com/gcr/images/k8s-artifacts-prod/us/provider-aws/cloud-controller-manager@sha256:454387af4fbfae72205b2b571a96bac2e35af6e1319c179207013b2a1ca5d510/details
That image was available before I raised this issue. The problem is that manifests/base/aws-cloud-controller-manager-daemonset.yaml and charts/aws-cloud-controller-manager/values.yaml both reference an image tag that does not exist:
https://github.com/kubernetes/cloud-provider-aws/blob/release-1.21/manifests/base/aws-cloud-controller-manager-daemonset.yaml#L31
https://github.com/kubernetes/cloud-provider-aws/blob/release-1.21/charts/aws-cloud-controller-manager/values.yaml#L8
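For clusters already running the DaemonSet from those manifests, a possible stopgap is to point it at a tag that does exist. This is a sketch only: the DaemonSet and container names are guessed from the manifest filename and may differ in practice.

kubectl --namespace kube-system set image daemonset/aws-cloud-controller-manager \
  aws-cloud-controller-manager=registry.k8s.io/provider-aws/cloud-controller-manager:v1.21.2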
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.