terraform-provider-kubernetes
Feature Request: equivalent of `kubectl patch`
Terraform Version
Terraform v0.12.18
Affected Resource(s)
n/a (request for new resource)
In AWS EKS, clusters come "pre-configured" with several things running in the kube-system namespace. We need to patch those pre-configured things while retaining any "upstream" changes that happen to be made (for example: to set HTTP_PROXY variables).
kubectl provides the patch keyword to handle this use case. The Kubernetes provider for Terraform should do the same.
Proposed example (this would add the proxy-environment-variables ConfigMap to the existing envFrom list, which already contains aws-node-environment-variable-additions, for the container named aws-node):
resource "kubernetes_patch" "aws-node" {
kind = daemonset
metadata {
name = "aws-node"
namespace = "kube-system"
}
spec {
template {
spec {
container {
name = "aws-node"
envFrom {
[
configMapRef {
name: proxy-environment-variables
}
configMapRef {
name: aws-node-environment-variable-additions
}
]
}
}
}
}
}
}
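For comparison, a rough sketch of how the same patch can be issued today through a local-exec provisioner (the resource name is illustrative, and it assumes kubectl is installed and already authenticated against the cluster):
resource "null_resource" "aws_node_envfrom_patch" {
  provisioner "local-exec" {
    # Strategic merge patch: the container is matched by name, and its envFrom
    # list is replaced with the two ConfigMap references from the proposal above.
    command = <<EOT
kubectl patch daemonset aws-node -n kube-system --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name":"aws-node","envFrom":[{"configMapRef":{"name":"proxy-environment-variables"}},{"configMapRef":{"name":"aws-node-environment-variable-additions"}}]}]}}}}'
EOT
  }
}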
I have 2 additional use cases for the same feature, both on EKS.
- If you want to utilise node taints & tolerations for all your nodes, any EKS-managed k8s resources, e.g. coredns, must be patched to tolerate the taints.
- Fargate on EKS. If you want to run a nodeless cluster and use Fargate to run everything, some EKS-managed resources, e.g. coredns, prevent this via an annotation:
  annotations:
    eks.amazonaws.com/compute-type: ec2
The annotation must be removed. The ability to patch resources would solve both these use cases and many others.
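As an interim workaround, the tolerations case can be scripted with a null_resource and a kubectl JSON patch; a minimal sketch, assuming kubectl is already configured for the cluster (the taint key/value are example values):
resource "null_resource" "coredns_tolerations" {
  provisioner "local-exec" {
    # Appends a toleration to the existing list; replace "dedicated=system-pool"
    # with whatever taint your node groups actually use.
    command = <<EOT
kubectl patch deployment coredns -n kube-system --type json \
  -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"dedicated","value":"system-pool","effect":"NoSchedule"}}]'
EOT
  }
}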
Yeah, it would also be nice to have an equivalent of kubectl taint node, as this won't work without tainting the nodes first.
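Until something native exists, that too can be scripted from a provisioner; a minimal sketch (the label selector and taint are example values, and kubectl is assumed to be authenticated already):
resource "null_resource" "taint_system_nodes" {
  provisioner "local-exec" {
    # Taints every node matching the label so only tolerating pods schedule there.
    command = "kubectl taint nodes -l role=system dedicated=system-pool:NoSchedule --overwrite"
  }
}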
+1
I am currently trying to update an existing ConfigMap and simply add more rules to it, but once the CM is created it seems that it cannot be referenced in order to be updated.
Any thoughts?
Thanks
+1
+1
When we set up an EKS cluster with Terraform and use tainted on-demand nodes for all system services, we have to patch CoreDNS first so that all subsequently installed apps work. For now we can't patch the existing EKS CoreDNS with Terraform, so we have to install a 3rd-party CoreDNS Helm chart at the beginning.
The ability to patch existing deployments would be really great.
+1 Would love this for some of our enterprise EKS Fargate Deployments
It would be nice to have such a Terraform resource to patch the EKS aws-node DaemonSet with a custom ServiceAccount, for example when using the IRSA approach for pod authorization.
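A hedged sketch of how that is typically worked around today, annotating the pre-installed ServiceAccount and restarting the DaemonSet via kubectl (the IAM role reference aws_iam_role.cni is a placeholder):
resource "null_resource" "aws_node_irsa" {
  provisioner "local-exec" {
    # Adds the IRSA role annotation, then restarts the DaemonSet so the pods
    # pick up the new credentials.
    command = <<EOT
kubectl annotate serviceaccount aws-node -n kube-system --overwrite \
  eks.amazonaws.com/role-arn=${aws_iam_role.cni.arn}
kubectl rollout restart daemonset aws-node -n kube-system
EOT
  }
}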
This is also needed to patch EKS clusters hit by https://github.com/kubernetes/kubernetes/issues/61486
This feature would feed very well into things like custom CNI on EKS
This would also help with the management of service meshes such as Linkerd or Istio, where one might want to add annotations to control mesh proxy injection into the kube-system or default namespace.
This request is actually being made in different forms in several issues now, see also:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/238 https://github.com/hashicorp/terraform/issues/22754
For anyone else who's running into this, we've for the moment worked around it with a truly awful abuse of the null resource and local provisioner:
resource "null_resource" "k8s_patcher" {
triggers = {
// fire any time the cluster is update in a way that changes its endpoint or auth
endpoint = google_container_cluster.default.endpoint
ca_crt = google_container_cluster.default.master_auth[0].cluster_ca_certificate
token = data.google_client_config.provider.access_token
}
# download kubectl and patch the default namespace
provisioner "local-exec" {
command = <<EOH
cat >/tmp/ca.crt <<EOF
${base64decode(google_container_cluster.default.master_auth[0].cluster_ca_certificate)}
EOF
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl
./kubectl \
--server="https://${google_container_cluster.default.endpoint}" \
--token="${data.google_client_config.provider.access_token}" \
--certificate_authority=/tmp/ca.crt \
patch namespace default \
-p '{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"linkerd.io/inject":"enabled"},"name":"default"}}'
EOH
}
}
I tweaked @memory's null_resource workaround to work with the aws provider. This should save anyone looking to run fargate-only EKS a bit of time.
resource "aws_eks_fargate_profile" "coredns" {
cluster_name = aws_eks_cluster.main.name
fargate_profile_name = "coredns"
pod_execution_role_arn = aws_iam_role.fargate_pod_execution_role.arn
subnet_ids = var.private_subnets.*.id
selector {
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
}
}
resource "null_resource" "k8s_patcher" {
depends_on = [ aws_eks_fargate_profile.coredns ]
triggers = {
// fire any time the cluster is update in a way that changes its endpoint or auth
endpoint = aws_eks_cluster.main.endpoint
ca_crt = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
provisioner "local-exec" {
command = <<EOH
cat >/tmp/ca.crt <<EOF
${base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)}
EOF
apk --no-cache add curl && \
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/aws-iam-authenticator && chmod +x ./aws-iam-authenticator && \
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && \
mkdir -p $HOME/bin && mv ./aws-iam-authenticator $HOME/bin/ && export PATH=$PATH:$HOME/bin && \
./kubectl \
--server="${aws_eks_cluster.main.endpoint}" \
--certificate_authority=/tmp/ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
patch deployment coredns \
-n kube-system --type json \
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
EOH
}
}
Any updates?
The following might also be a viable workaround:
resource "local_file" "kubeconfig" {
filename = pathexpand("~/.kube/config")
content = <<-CONFIG
apiVersion: v1
kind: Config
clusters:
- name: clustername
cluster:
server: ${aws_eks_cluster.this.endpoint}
certificate-authority-data: ${aws_eks_cluster.this.certificate_authority.0.data}
contexts:
- name: contextname
context:
cluster: clustername
user: username
current-context: contextname
users:
- name: username
user:
token: ${data.aws_eks_cluster_auth.this-auth.token}
CONFIG
}
Might work quicker since the token should only be requested once and then reused for any kubectl commands.
Also doesn't depend on having aws-cli installed.
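For example, a follow-up patch resource could point kubectl at that generated file; a minimal sketch, reusing the coredns Fargate patch from the earlier comment:
resource "null_resource" "coredns_patch" {
  depends_on = [local_file.kubeconfig]

  provisioner "local-exec" {
    environment = {
      # reuse the kubeconfig written above instead of passing server/token/CA flags
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = <<EOT
kubectl -n kube-system patch deployment coredns --type json \
  -p '[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
EOT
  }
}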
Any update?
Same question for me :) If I want to allow users other than myself to manage an AWS EKS cluster, I have to edit the aws-auth ConfigMap. It would be very useful to patch this ConfigMap after a deployment rather than replacing it entirely.
Adding to the list: patching the argocd-cm ConfigMap to add a private repository. I bootstrap AKS+ArgoCD and I'd like to use private repos for the apps.
Same issue here, with the need to patch coredns for the taints/tolerations setup and the aws-node DaemonSet for some parameter tweaking (like the IP warm target and enabling external SNAT).
It would be really nice to be able to get all resources provisioned in one shot by Terraform, without workarounds like the local-exec provisioner, which does not work on TFE out of the box due to missing kubectl.
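For reference, a hedged sketch of the aws-node tweaks as they are usually done today via local-exec (the variable values are examples only; kubectl is assumed to be authenticated against the cluster):
resource "null_resource" "aws_node_cni_tuning" {
  provisioner "local-exec" {
    # Sets VPC CNI environment variables on the pre-installed DaemonSet.
    command = <<EOT
kubectl set env daemonset aws-node -n kube-system \
  WARM_IP_TARGET=5 AWS_VPC_K8S_CNI_EXTERNALSNAT=true
EOT
  }
}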
This is also relevant when you want to deploy an EKS cluster running only Fargate: you need to patch the existing CoreDNS deployment in order to run it on Fargate.
Also needed to simply edit the coredns-custom configmap that is created by default in AKS.
I've got a similar requirement, so until there is a better method, I'm using a template and null resource:
# argocd-cm patch
# https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file
data "template_file" "argocd_cm" {
  template = file(var.argocd_cm_yaml_path)
  vars = {
    tenantId    = data.azurerm_client_config.current.tenant_id
    appClientId = azuread_service_principal.argocd.application_id
  }
}

# https://www.terraform.io/docs/provisioners/local-exec.html
resource "null_resource" "argocd_cm" {
  triggers = {
    yaml_contents = filemd5(var.argocd_cm_yaml_path)
    sp_app_id     = azuread_service_principal.argocd.application_id
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = var.aks_config_path
    }
    command = <<EOT
      kubectl patch configmap/argocd-cm --namespace argocd --type merge --patch "${data.template_file.argocd_cm.rendered}"
    EOT
  }

  depends_on = [
    local_file.kubeconfig,
    null_resource.argocd_configure
  ]
}
Adding a use case to this list as well, I'm on GKE and I'm using the terraform code below to create a namespace/secrets to help bootstrap a cluster with service account keys required by the applications. If the secret data changes, I'd like to overwrite the secret, but this fails due to the namespace existing already.
resource "kubernetes_namespace" "default" {
metadata {
name = var.namespace
}
}
resource "kubernetes_secret" "default" {
depends_on = [kubernetes_namespace.default]
metadata {
name = var.kube_secret_name
namespace = var.namespace
}
data = var.secret_data
type = var.secret_type
}
Thanks everyone for your patience on this issue.
We're looking at implementing this feature and are discussing where it should live and what its implementation is going to look like. If you want to add your 2¢ please feel free to contribute your thoughts to this proposal PR: https://github.com/hashicorp/terraform-provider-kubernetes/pull/1257
+1 on this feature request for a generic way to apply patches to existing resources. It would also be helpful to have a prescriptive way to handle the common requirement to create/update specific key/values within an existing ConfigMap or Secret.
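Until there is a first-class answer, a merge patch on individual data keys is one way to express that from a provisioner; a minimal sketch (the ConfigMap name, namespace, and key are hypothetical):
resource "null_resource" "configmap_key_patch" {
  provisioner "local-exec" {
    # A merge patch only touches the keys listed in the patch body; the other
    # data keys in the ConfigMap are left intact.
    command = <<EOT
kubectl patch configmap my-config -n my-namespace --type merge \
  -p '{"data":{"HTTP_PROXY":"http://proxy.internal:3128"}}'
EOT
  }
}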
Please also note the ability to add annotations to existing resources, such as service accounts installed by managed services like EKS; there should be something like:
resource "kubernetes_service_account" "aws_node" {
  allow_import_if_exists = true

  metadata {
    name      = "aws-node"
    namespace = "kube-system"

    annotations = {
      # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
      "eks.amazonaws.com/role-arn" = "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AmazonEKSCNIRole>"
    }
  }
}
https://github.com/hashicorp/terraform-provider-kubernetes/issues/692
Thanks for adding your thoughts @bfelaco @alexmnyc.
I'm collecting acceptance test criteria for this feature. If you have a specific use case you need this for, please share it with as much detail as possible in the proposal PR linked above. If you are already solving it outside of Terraform or with a null_resource, I'd love to hear about that too.
I can share a sample using null-resources to enable IRSA for the node daemonset as described here: https://aws.github.io/aws-eks-best-practices/security/docs/iam/#update-the-aws-node-daemonset-to-use-irsa
many thanks to @jensbac ! :)
Adding myself to notifications on this one :+1: