terraform-aws-eks

assume_role_with_web_identity only seems to work on first run; subsequent runs get permission errors

md850git opened this issue 1 year ago • 5 comments

Description

I have built an EKS cluster with the latest version of the module, plus a Helm chart, on the first apply (I'm using Jenkins in GKE to deploy AWS resources). It uses access entries and builds fine. If I then attempt to deploy further Helm charts or other resources, such as a Kubernetes resource, I get permission errors: it falls back to the service account of the Jenkins pod running on GKE rather than using the web identity that is in the provider config, like so:

provider "aws" { region = "us-east-1" assume_role_with_web_identity { role_arn = "arn:aws:iam::${var.account_id}:role/myrole" session_name = "session name" web_identity_token_file = "token.txt" }

I've removed the kubernetes provider, as I believed it wasn't needed in v20 (?). Is there a similar setup for Helm so that the helm provider isn't needed either?
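For reference, here is a minimal sketch of how the helm provider could be wired up with exec-based authentication under the same assumed role. This is an assumption rather than the module's own setup: it presumes the AWS CLI is available in the Jenkins pod and reuses the hypothetical role ARN and token file from the AWS provider block above via the standard AWS web-identity environment variables.

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]

      # Assumption: point the AWS CLI at the same web-identity role the AWS
      # provider assumes, so the EKS token is minted under that role instead
      # of whatever ambient credentials the Jenkins pod has.
      env = {
        AWS_ROLE_ARN                = "arn:aws:iam::${var.account_id}:role/myrole"
        AWS_WEB_IDENTITY_TOKEN_FILE = "token.txt"
      }
    }
  }
}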

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.


Versions

  • Module version [Required]: v20
  • Terraform version: 1.6.6
  • Provider version(s): hashicorp/aws 5.67.0

Reproduction Code [Required]

resource "kubernetes_namespace_v1" "this" { metadata { name = "argocd" } }

Steps to reproduce the behavior:

Expected behavior

A Kubernetes namespace is created.

Actual behavior

Error: namespaces is forbidden: User "system:serviceaccount:REDACTED" cannot create resource "namespaces" in API group "" at the cluster scope

Terminal Output Screenshot(s)

Error: namespaces is forbidden: User "system:serviceaccount:REDACTED" cannot create resource "namespaces" in API group "" at the cluster scope

Additional context

md850git commented on Sep 17, 2024

The AWS provider you have shown is for creating AWS resources; however, your errors are occurring at the Kubernetes/cluster level. I don't see any Kubernetes or Helm providers (nor a reproduction), so it is hard to say what is misconfigured.

In general though, this seems to be an issue with your provider authentication and not with the module.

bryantbiggs commented on Sep 17, 2024

Yeah, so I'm guessing the kubernetes provider isn't using the assumed role from the main calling project.

data "aws_eks_cluster" "eks" { name = module.eks.cluster_name depends_on = [ module.eks.eks_managed_node_groups, ] }

data "aws_eks_cluster_auth" "eks" { name = module.eks.cluster_name depends_on = [ module.eks.eks_managed_node_groups, ] }

provider "kubernetes" { host = data.aws_eks_cluster.eks.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority.0.data) token = data.aws_eks_cluster_auth.eks.token }

I have configured it like so, but I'm not sure the data sources are using the assumed role correctly from the provider in the main calling project.

I believe this is the equivalent of exec-ing out to the AWS CLI.
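For comparison, a minimal sketch of what an explicit exec-based kubernetes provider could look like here. This is an assumption, not something confirmed in the thread: it presumes the AWS CLI is installed on the Jenkins agent and can read the same hypothetical web-identity token file, so that the EKS token is requested under the assumed role rather than the pod's default credentials.

provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]

    # Assumption: the standard AWS web-identity environment variables make the
    # CLI assume the same role as the AWS provider; without them it would fall
    # back to the Jenkins pod's ambient credentials.
    env = {
      AWS_ROLE_ARN                = "arn:aws:iam::${var.account_id}:role/myrole"
      AWS_WEB_IDENTITY_TOKEN_FILE = "token.txt"
    }
  }
}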

md850git commented on Sep 17, 2024

> Yeah, so I'm guessing the kubernetes provider isn't using the assumed role from the main calling project.

I don't know what you mean by this. Users have to tell the providers how to authenticate; the module does not do anything in terms of providers or authentication.

bryantbiggs commented on Sep 17, 2024

I figured that these:

data "aws_eks_cluster" "eks" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks.eks_managed_node_groups,
  ]
}

data "aws_eks_cluster_auth" "eks" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks.eks_managed_node_groups,
  ]
}

would be run using the AWS provider config, which includes the assume-role config:

provider "aws" {
  region = "us-east-1"

  assume_role_with_web_identity {
    role_arn                = "arn:aws:iam::${var.account_id}:role/myrole"
    session_name            = "session name"
    web_identity_token_file = "token.txt"
  }
}

md850git commented on Sep 17, 2024

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days

github-actions[bot] commented on Oct 18, 2024

This issue was automatically closed because it remained stale for 10 days.

github-actions[bot] commented on Oct 28, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] commented on Nov 28, 2024