No Docker image for version v0.5.27 has been published?
What happened?
I set up a cluster using kOps v1.29.0, which tried to use the following image for aws-iam-authenticator:
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27
The pods were stuck, unable to pull the image, and `kubectl describe pod` showed the following events:
Normal BackOff 52s (x2 over 79s) kubelet Back-off pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27"
Warning Failed 52s (x2 over 79s) kubelet Error: ImagePullBackOff
Normal Pulling 40s (x3 over 80s) kubelet Pulling image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27"
Warning Failed 40s (x3 over 79s) kubelet Failed to pull image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": rpc error: code = NotFound desc = failed to pull and unpack image "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": failed to resolve reference "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27": 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27: not found
Warning Failed 40s (x3 over 79s) kubelet Error: ErrImagePull
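For anyone who wants to confirm independently that the tag is missing, a minimal check along these lines should reproduce the `not found` error (a sketch: it assumes AWS credentials with pull access to this ECR registry; account and region are taken from the failing image reference):

```sh
# Log in to the EKS-owned ECR registry (assumes your AWS credentials
# are allowed to pull from it).
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin 602401143452.dkr.ecr.us-west-2.amazonaws.com

# Fails with "not found" for v0.5.27, but succeeds for v0.5.21:
docker manifest inspect 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27
docker manifest inspect 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.21
```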
If I edited the DaemonSet and changed the image to the previous version, 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.21, everything worked fine.
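For other kOps users hitting this, the same workaround can be applied with a one-liner along these lines (a sketch: the DaemonSet and container name `aws-iam-authenticator` in `kube-system` are assumptions, and kOps may revert manual edits on its next addon reconciliation):

```sh
# Hypothetical workaround: pin the DaemonSet back to the last published tag.
# DaemonSet/container names are assumed; verify with `kubectl -n kube-system get ds`.
kubectl -n kube-system set image daemonset/aws-iam-authenticator \
  aws-iam-authenticator=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.21
```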
I suspect the image publishing for that release failed. goreleaser threw an error in the repo's release pipeline when trying to publish:
• publishing
• docker images
• pushing image=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64
⨯ release failed after 5m19s error=docker images: failed to publish artifacts: failed to push 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64 after 0 tries: failed to push 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.27-amd64: exit status 1: no basic auth credentials
The push refers to repository [602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator]
https://github.com/kubernetes-sigs/aws-iam-authenticator/actions/runs/8854199394/job/24316751369
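For context, `no basic auth credentials` is the error Docker emits when pushing to ECR without a valid login, so a missing or expired authentication step in the release job is one plausible explanation. A release pipeline normally authenticates with something like the step below before `docker push` (illustrative only, not taken from this repo's actual workflow; the maintainers would need to confirm the real root cause):

```sh
# Typical ECR authentication step before pushing an image
# (illustrative sketch; not this repo's actual release workflow).
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin 602401143452.dkr.ecr.us-west-2.amazonaws.com
```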
What did you expect to happen?
Be able to pull the image and use it for the current 0.5 release.
Anything else we need to know?
No response
Installation tooling
kOps
AWS IAM Authenticator server Version
v0.5.27
Client information
- OS/arch: `Ubuntu 22.04`
- kubernetes client & version: Client = `1.30.1` Server = `1.29.5`
- authenticator client & version: `v0.5.27`
Kubernetes API Version
v1.29.5
aws-iam-authenticator YAML manifest
No response
kube-apiserver YAML manifest
No response
aws-iam-authenticator logs
No response
@dims By any chance, do you know anyone who could look into the image promotion failure? Thanks!
It seems the problem I reported has already been discussed in https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/722
Ignore below:
Another problem: the latest release (currently v0.5.27) has only two assets, both of which are source code archives.
Can you try 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/authenticator:v0.5.27 instead?
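For reference, the same kind of check as above can confirm that this tag exists before pointing a cluster at it (note the different region and repository path; again assuming credentials with pull access):

```sh
# Verify the suggested alternate image resolves (us-east-2, eks/authenticator):
aws ecr get-login-password --region us-east-2 \
  | docker login --username AWS --password-stdin 602401143452.dkr.ecr.us-east-2.amazonaws.com
docker manifest inspect 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/authenticator:v0.5.27
```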
@nnmin-aws The kOps project is affected by this missing image, as the current release references it. I would appreciate some feedback on how to move forward, because the missing image will break things for many of our users.
Discussion on Slack: https://kubernetes.slack.com/archives/C0LRMHZ1T/p1716614694690089?thread_ts=1716474632.991109&cid=C0LRMHZ1T
This has been pending for quite some time; do you know if there is an ETA for the fix? We are using kOps as well and, as @hakman noted, we are forced to find workarounds for the missing released image.
Apologies for the inconvenience. We will have a new release today. Please note that v0.5.x is only for Kubernetes versions <= 1.23 and will no longer be released, since 1.23 has reached end of life. Please pick up v0.6.x instead. Thank you!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.