aws-iam-authenticator
[Bug]: Kubernetes API call failed: aws-iam-authenticator failed with exit code 1
What happened?
Hi all, I am getting the following error when deploying an EKS cluster with Flux v2:
2023-07-22 15:14:01 [ℹ] gitops configuration detected, setting installer to Flux v2
2023-07-22 15:14:01 [ℹ] ensuring v1 repo components not installed
2023-07-22 15:14:01 [ℹ] running pre-flight checks
► checking prerequisites
Assume Role MFA token code: could not get token: EOF
✗ Kubernetes API call failed: Get "https://5A859C6D442D2404A238FE3125FD2398.gr7.us-east-1.eks.amazonaws.com/version": getting credentials: exec: executable aws-iam-authenticator failed with exit code 1
Error: running Flux pre-flight checks: exit status 1
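The "could not get token: EOF" line suggests that aws-iam-authenticator tried to read the MFA code from stdin but was invoked without a terminal attached, so the read immediately hit EOF. If I understand the failure mode correctly (my assumption, untested), the same error should be reproducible outside eksctl by running the exec-plugin invocation from the generated kubeconfig (shown further down) with stdin closed:
❯ AWS_PROFILE=route105 aws-iam-authenticator token -i turbonomic-xl-integration < /dev/null
The < /dev/null is only there to simulate a caller that does not pass stdin through.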
The command I am using to deploy it is:
❯ eksctl create cluster --config-file eks-cluster.yaml --profile integration
The block I am using to install Flux v2 in the eks-cluster.yaml file is:
gitops:
  flux:
    gitProvider: github
    flags:
      owner: "mycompany"
      repository: "appv1"
      private: "true"
      branch: "dev"
      namespace: "flux-system"
      read-write-key: "true"
      path: "clusters/dev"
NOTE: If I remove the Flux block from the eks-cluster.yaml file, the cluster deploys without issue. And if I then run the following command to install Flux v2, it installs without issue:
❯ flux bootstrap github --owner=mycompany --repository=appv1 --branch=dev --path=clusters/dev
So it seems the issue is in eksctl (or in something I am doing), not in Flux.
My question is: why does it fail when the gitops block is inside the eks-cluster.yaml file?
I appreciate any help in this regard.
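In the meantime, the workaround I am looking at is to mint MFA session credentials up front and export them, so nothing needs to prompt while eksctl runs its pre-flight checks. This is only a sketch: the MFA device ARN and token code are placeholders, and it assumes the integration profile resolves to long-lived user credentials (aws sts get-session-token does not work against a role profile):
❯ creds=$(aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/my-user \
    --token-code 123456 \
    --profile integration)
❯ export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .Credentials.AccessKeyId)
❯ export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .Credentials.SecretAccessKey)
❯ export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .Credentials.SessionToken)
❯ eksctl create cluster --config-file eks-cluster.yaml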
Thanks, Julian
What did you expect to happen?
I expected either no error, or to be prompted for the authentication token again.
Anything else we need to know?
No response
Installation tooling
homebrew
AWS IAM Authenticator client version
{"Version":"0.6.10","Commit":"ea9bcaeb5e62c110fe326d1db58b03a782d4bdd6"}
Client information
MacOS Ventura 13.5 Intel
eksctl v0.150.0-dev+cdcf906b7.2023-07-20T12
aws-cli/2.13.5
Kubernetes API Version
kubectl v1.27.4
kubeconfig user
[
  {
    "name": "aws-go-sdk-1690917158155130000@turbonomic-xl-integration.us-east-1.eksctl.io",
    "user": {
      "exec": {
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "args": [
          "token",
          "-i",
          "turbonomic-xl-integration"
        ],
        "command": "aws-iam-authenticator",
        "env": [
          {
            "name": "AWS_STS_REGIONAL_ENDPOINTS",
            "value": "regional"
          },
          {
            "name": "AWS_DEFAULT_REGION",
            "value": "us-east-1"
          },
          {
            "name": "AWS_PROFILE",
            "value": "route105"
          }
        ],
        "interactiveMode": "IfAvailable",
        "provideClusterInfo": false
      }
    }
  }
]
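One field that stands out above is "interactiveMode": "IfAvailable". Per the client.authentication.k8s.io exec credential API, IfAvailable means the plugin may use stdin only when the calling client can provide it, while Always means the plugin requires stdin and clients that cannot supply it should fail fast instead of running the plugin against a closed stdin. As an experiment (my assumption; I have not verified how eksctl treats this field during its pre-flight checks), the generated exec stanza could be edited to demand stdin, which should at least turn the EOF into a clearer failure:
"exec": {
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "args": ["token", "-i", "turbonomic-xl-integration"],
  "command": "aws-iam-authenticator",
  "env": [
    { "name": "AWS_STS_REGIONAL_ENDPOINTS", "value": "regional" },
    { "name": "AWS_DEFAULT_REGION", "value": "us-east-1" },
    { "name": "AWS_PROFILE", "value": "route105" }
  ],
  "interactiveMode": "Always",
  "provideClusterInfo": false
}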
@jfacevedo you may want to open an issue in the eksctl repo and cross-link it to this one. I don't think this is solvable just by folks working on this repo (it's about how eksctl uses this binary, I think).
Hi @dims, I opened a case on eksctl a week ago, but I have not received any response. I thought it might be an issue with aws-iam-authenticator as well. https://github.com/eksctl-io/eksctl/issues/6843
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.