terraform-provider-kubernetes
K8S Manifest with env with VALUE_FROM consistently returning errors during Apply
I am trying to use Terraform to manage updating the version of the AWS VPC CNI plugin DaemonSet in my EKS cluster, moving from v1.11.4 to v1.12.1. I am able to apply this DaemonSet to the cluster manually with `kubectl apply` without issue.
However, doing the same through Terraform consistently fails. I imported the running v1.11.4 DaemonSet into my Terraform state with this command:
```sh
terraform import module.aws_k8s_cni_plugin.kubernetes_manifest.daemonset_kube_system_aws_node "apiVersion=apps/v1,kind=DaemonSet,namespace=kube-system,name=aws-node"
```
The import succeeds and I can see it loaded fine in my state file. After the import, I am trying to update the daemonset using this configuration:
resource "kubernetes_manifest" "daemonset_kube_system_aws_node" {
manifest = yamldecode(file("${path.module}/yaml/daemonset.yaml"))
field_manager {
force_conflicts = true
}
}
The YAML file I'm pointing to is the exact manifest that I was able to apply to the cluster manually:
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: aws-node
  namespace: kube-system
  labels:
    app.kubernetes.io/name: aws-node
    app.kubernetes.io/instance: aws-vpc-cni
    k8s-app: aws-node
    app.kubernetes.io/version: "v1.12.1"
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: aws-node
  template:
    metadata:
      labels:
        app.kubernetes.io/name: aws-node
        app.kubernetes.io/instance: aws-vpc-cni
        k8s-app: aws-node
    spec:
      priorityClassName: "system-node-critical"
      serviceAccountName: aws-node
      hostNetwork: true
      initContainers:
        - name: aws-vpc-cni-init
          image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.12.1"
          env:
            - name: DISABLE_TCP_EARLY_DEMUX
              value: "false"
            - name: ENABLE_IPv6
              value: "false"
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
      terminationGracePeriodSeconds: 10
      tolerations:
        - operator: Exists
      securityContext:
        {}
      containers:
        - name: aws-node
          image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.12.1"
          ports:
            - containerPort: 61678
              name: metrics
          livenessProbe:
            exec:
              command:
                - /app/grpc-health-probe
                - -addr=:50051
                - -connect-timeout=5s
                - -rpc-timeout=5s
            initialDelaySeconds: 60
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /app/grpc-health-probe
                - -addr=:50051
                - -connect-timeout=5s
                - -rpc-timeout=5s
            initialDelaySeconds: 1
            timeoutSeconds: 10
          env:
            - name: ADDITIONAL_ENI_TAGS
              value: "{}"
            - name: AWS_VPC_CNI_NODE_PORT_SUPPORT
              value: "true"
            - name: AWS_VPC_ENI_MTU
              value: "9001"
            - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
              value: "false"
            - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
              value: "false"
            - name: AWS_VPC_K8S_CNI_LOGLEVEL
              value: "DEBUG"
            - name: AWS_VPC_K8S_CNI_LOG_FILE
              value: "/host/var/log/aws-routed-eni/ipamd.log"
            - name: AWS_VPC_K8S_CNI_RANDOMIZESNAT
              value: "prng"
            - name: AWS_VPC_K8S_CNI_VETHPREFIX
              value: "eni"
            - name: AWS_VPC_K8S_PLUGIN_LOG_FILE
              value: "/var/log/aws-routed-eni/plugin.log"
            - name: AWS_VPC_K8S_PLUGIN_LOG_LEVEL
              value: "DEBUG"
            - name: DISABLE_INTROSPECTION
              value: "false"
            - name: DISABLE_METRICS
              value: "false"
            - name: DISABLE_NETWORK_RESOURCE_PROVISIONING
              value: "false"
            - name: ENABLE_IPv4
              value: "true"
            - name: ENABLE_IPv6
              value: "false"
            - name: ENABLE_POD_ENI
              value: "false"
            - name: ENABLE_PREFIX_DELEGATION
              value: "false"
            - name: WARM_ENI_TARGET
              value: "1"
            - name: WARM_PREFIX_TARGET
              value: "1"
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          resources:
            requests:
              cpu: 25m
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /host/var/log/aws-routed-eni
              name: log-dir
            - mountPath: /var/run/aws-node
              name: run-dir
            - mountPath: /run/xtables.lock
              name: xtables-lock
      volumes:
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        - name: log-dir
          hostPath:
            path: /var/log/aws-routed-eni
            type: DirectoryOrCreate
        - name: run-dir
          hostPath:
            path: /var/run/aws-node
            type: DirectoryOrCreate
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate
```
When applying this configuration, `terraform plan` and `terraform validate` both complete without problems. However, `terraform apply` consistently fails with this error:
```text
Error: spec.template.spec.containers[0].env[21].valueFrom

  with module.aws_k8s_cni_plugin.kubernetes_manifest.daemonset_kube_system_aws_node,
  on modules/aws_k8s_cni_plugin/manifests.tf line 1, in resource "kubernetes_manifest" "daemonset_kube_system_aws_node":
   1: resource "kubernetes_manifest" "daemonset_kube_system_aws_node" {

Invalid value: "": may not be specified when `value` is not empty
```
I have tried using the kubernetes_daemonset resource instead of kubernetes_manifest, but received the same error. I have also tried adding the valueFrom path via `computed_fields`, with no success.
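For reference, the `computed_fields` variant looked roughly like this (a sketch only; the env path and list-index syntax below are illustrative rather than copied verbatim from my config):

```hcl
resource "kubernetes_manifest" "daemonset_kube_system_aws_node" {
  manifest = yamldecode(file("${path.module}/yaml/daemonset.yaml"))

  # Illustrative only: mark the first container's env list as computed so the
  # provider stops reconciling server-populated values there. The list-index
  # path syntax here is a guess, not something I have verified in the docs.
  computed_fields = [
    "metadata.labels",
    "metadata.annotations",
    "spec.template.spec.containers[0].env",
  ]

  field_manager {
    force_conflicts = true
  }
}
```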
Any input on why this error is happening or ideas to fix it would be greatly appreciated!
Hi @adamdepollo,
The error message that you observe happens when an environment variable has both `value` and `valueFrom` defined. To better understand why that happens, please share the debug output: https://developer.hashicorp.com/terraform/internals/debugging
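For illustration only (this is a hypothetical shape, not taken from your actual state), an env entry that matches that description would look like this when expressed as the HCL object that yamldecode produces:

```hcl
# Hypothetical, for illustration only: a single env list entry carrying both
# `value` and `valueFrom`. The Kubernetes API server rejects this combination
# with the validation error shown above.
locals {
  conflicting_env_entry = {
    name  = "MY_NODE_NAME"
    value = "ip-10-0-0-1" # placeholder value, not from the real manifest
    valueFrom = {
      fieldRef = {
        fieldPath = "spec.nodeName"
      }
    }
  }
}
```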
Thank you.
Hi @arybolovlev,
Sorry for the delay; I created a gist with the output here. It is fairly truncated, but hopefully the relevant info is there: https://gist.github.com/adamdepollo/89545c3f27c81a1688e95ddf141719f9
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!