terraform-provider-helm
Exec plugin executable aws not found
Terraform, Provider, Kubernetes and Helm Versions
Terraform version: 1.1.7 & 1.1.9, using the Terraform Cloud VCS-driven workflow
Provider version: 2.4.1
Kubernetes version: 1.22
Affected Resource(s)
- helm_release
Terraform Configuration Files
provider "aws" {
region = var.region
}
provider "helm" {
kubernetes {
host = module.cluster.cluster_endpoint
token = module.cluster.cluster_auth
cluster_ca_certificate = module.cluster.cluster_ca
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", module.cluster.cluster_name, "--region", var.region]
command = "aws"
env = { "AWS_STS_REGIONAL_ENDPOINTS" : "regional" }
}
}
Debug Output
Error: Kubernetes cluster unreachable: Get "https://[cluster API endpoint].eks.amazonaws.com/version": getting credentials: exec: executable aws not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
Steps to Reproduce
- Create Helm release resource
- Commit to the repo or manually trigger a run in Terraform Cloud
- Allow the plan to run and confirm the apply
Expected Behavior
The exec plugin should run the aws eks get-token command to retrieve an authentication token for the cluster.
Actual Behavior
The provider fails to run the command, returning an "executable aws not found" error.
Important Factoids
We began experiencing this issue suddenly while testing new EKS cluster builds. We rolled back our commit to the most recent successful build, but we continued to receive the error above. We tried different versions of Terraform and the providers, created fresh workspaces, and tried building in different AWS accounts. Additionally, the cluster API logs show that the Terraform user is authenticated and authorized, so it appears that the environment where the exec plugin runs is experiencing an issue. We had been using this provider configuration successfully for two months without issue until now. We have found a temporary workaround that passes a token directly to the Helm provider, but this may cause issues with extended sessions.
provider "helm" {
kubernetes {
host = module.cluster.cluster_endpoint
token = module.cluster.cluster_auth
cluster_ca_certificate = module.cluster.cluster_ca
}
}
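For context, the token in this workaround presumably comes from something like the aws_eks_cluster_auth data source; the module internals are not shown in this issue, so the following root-level equivalent is a sketch based on that assumption:

# Assumed sketch of where module.cluster.cluster_auth could come from.
# aws_eks_cluster_auth issues a short-lived token (on the order of 15
# minutes), which is the likely source of the extended-session concern.
data "aws_eks_cluster_auth" "this" {
  name = module.cluster.cluster_name
}

# The helm provider would then reference it directly:
#   token = data.aws_eks_cluster_auth.this.token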
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
The exec plugin configuration requires the actual plugin binary to be present. In the case of AWS, this is the aws CLI tool.
Do you have the aws CLI available in the PATH for Terraform to use during its operation?
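One quick way to confirm this from inside the run environment is a throwaway null_resource; this is a hypothetical diagnostic, not part of the reporter's configuration:

# Hypothetical diagnostic: prints whether the aws binary is visible to the
# worker that executes Terraform. Remove once the check is done.
resource "null_resource" "check_aws_cli" {
  provisioner "local-exec" {
    command = "command -v aws && aws --version || echo 'aws not found on PATH'"
  }
}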
We are using Terraform Cloud, so my understanding is that all commands are being run on a remote worker. We do have the AWS provider configured. Also, the exec plugin worked for approximately two months and is the suggested method of connecting to EKS clusters, per this HashiCorp documentation: https://registry.terraform.io/providers/hashicorp/helm/latest/docs. The "command not found" portion of the error indicates that the worker does not have the CLI installed, yet it doesn't seem that we as end users can install it.
I no longer have access to a paid Terraform Cloud account to test, but if they did remove the AWS binary from the cloud runners, you are able to install it yourself: https://www.terraform.io/cloud-docs/run/install-software#only-install-standalone-binaries
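Following that documentation, one way to pull the AWS CLI onto the worker at run time might look like the sketch below; the resource name and /tmp paths are illustrative assumptions, not a tested recipe:

# Illustrative sketch: download and install the AWS CLI v2 on the worker at
# the start of each run, then point the exec plugin at the installed binary.
resource "null_resource" "install_aws_cli" {
  triggers = { always = timestamp() } # re-run on every apply; workers are fresh

  provisioner "local-exec" {
    command = <<-EOT
      curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
      unzip -q -o /tmp/awscliv2.zip -d /tmp
      /tmp/aws/install -i /tmp/aws-cli -b /tmp/bin
    EOT
  }
}

# In the helm provider's exec block, reference the installed binary:
#   command = "/tmp/bin/aws"

Note that provider configuration is evaluated before resources are applied, so the binary may not exist when the helm provider first tries to connect; treat this as a starting point within the constraints described in the linked doc, not a definitive fix.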
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!