terraform-provider-kubernetes
kubernetes_secret produces inconsistent final plan
When creating a kubernetes_secret with multiple files, Terraform throws the following error on the first run:
Error: Provider produced inconsistent final plan
When expanding the plan for kubernetes_secret.tls_secret to include new values
learned so far during apply, provider "registry.terraform.io/-/kubernetes"
produced an invalid new value for .data: inconsistent values for sensitive
attribute.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Terraform Version
Terraform: v0.12.20
K8s provider version: v1.11.1
Affected Resource(s)
- kubernetes_secret
Terraform Configuration Files
resource "kubernetes_secret" "tls_secret" {
type = "kubernetes.io/tls"
metadata {
name = var.tls_secret_name
}
data = {
"tls.crt" = file("${path.module}/resources/gcp.crt")
"tls.key" = file("${path.module}/resources/gcp.key")
}
}
Expected Behavior
The resource should be created without throwing an error.
Actual Behavior
First run produces the following error:
Error: Provider produced inconsistent final plan
When expanding the plan for kubernetes_secret.tls_secret to include new values
learned so far during apply, provider "registry.terraform.io/-/kubernetes"
produced an invalid new value for .data: inconsistent values for sensitive
attribute.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
This issue goes away on the second run.
Steps to Reproduce
- terraform init
- terraform plan -out=test.plan
- terraform apply
Hi, did you solve the problem? I'm running into the same problem, even on a second run. I'm using Terraform version v0.12.24 and TLS files created by the terraform acme provider.
Error: Provider produced inconsistent final plan
When expanding the plan for kubernetes_secret.tls-rancher-ingress-key to include new values
learned so far during apply, provider "registry.terraform.io/-/kubernetes"
produced an invalid new value for .data: inconsistent values for sensitive
attribute.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
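For context, a setup like mine would look roughly like this; the resource names and the acme wiring are illustrative, not my exact config:

# Sketch only: certificate_pem / private_key_pem are attributes of the
# acme provider's acme_certificate resource; names are placeholders.
resource "kubernetes_secret" "tls-rancher-ingress-key" {
  metadata {
    name = "tls-rancher-ingress"
  }
  type = "kubernetes.io/tls"
  data = {
    "tls.crt" = acme_certificate.rancher.certificate_pem
    "tls.key" = acme_certificate.rancher.private_key_pem
  }
}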
Hey,
Same problem here, even with a secret with just one file.
resource "kubernetes_secret" "cert-manager-secret" {
metadata {
name = "secret-name"
namespace = kubernetes_namespace.cert_manager.metadata.0.name
}
type = "Opaque"
data = {
"key.json" = data.template_file.cert_secret.template
}
}
Terraform version is v0.12.24, Kubernetes provider version is v1.11.1.
Hi,
I tried to reproduce this bug by performing the above steps, and it works absolutely fine for me. Is there anything we are missing to reproduce this?
Same here.
Terraform: v0.12.28
K8s provider version: v1.11.3
The certificate files (crt and key) are different between 'plan' and 'apply'. This is intentional in our case.
resource "kubernetes_secret" "xxxxx" {
count = var.toogle == "true" ? 1 : 0
metadata {
name = "xxxxx"
namespace = "xxxx"
}
data = {
"tls.crt" = file("${path.module}/path/file.crt")
"tls.key" = file("${path.module}/path/file.key")
}
type = "kubernetes.io/tls"
+ resource "kubernetes_secret" "xxxxx" {
+ data = (sensitive value)
+ id = (known after apply)
+ type = "kubernetes.io/tls"
Is there any way to force the "(sensitive value)" to something like "(known after apply)" ?
Thanks!!
Hi all, also hit this issue. For me, it happened when trying to generate files with sensitive data at runtime (a minimal sketch follows after the list):
- Get the secret data from the source.
- Prepare the data with a local-exec provisioner (decrypt, replace placeholders with actual data in the local file).
- Pass the file generated in step 2 as input to the kubernetes_secret resource.
- Observe the issue.
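A minimal sketch of that workflow, assuming a hypothetical decrypt.sh script and hypothetical names and paths:

resource "null_resource" "prepare_secret" {
  provisioner "local-exec" {
    # decrypt.sh stands in for the decrypt / placeholder-replacement step
    command = "./decrypt.sh encrypted.json > ${path.module}/generated/key.json"
  }
}

resource "kubernetes_secret" "generated" {
  metadata {
    name = "generated-secret"
  }
  data = {
    # key.json is rewritten by the local-exec step, so its contents can
    # differ between plan and apply, which is when the error appears.
    "key.json" = file("${path.module}/generated/key.json")
  }
  depends_on = [null_resource.prepare_secret]
}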
I am also hitting this issue. In my case I'm creating new Kubernetes resources.
Terraform version: 0.13.5
Kubernetes provider version: 1.13.3
I'm trying to narrow it down further, but so far I've found that a template like this works fine:
resource "kubernetes_secret" "secret" {
metadata {
name = "secret_name"
namespace = kubernetes_namespace.namespace.metadata[0].name
}
type = "Opaque"
data = {
api-key = "secret"
}
}
But when I used this, it didn't work:
resource "kubernetes_secret" "secret" {
metadata {
name = "secret_name"
namespace = kubernetes_namespace.namespace.metadata[0].name
}
type = "Opaque"
data = {
api-key = var.api_key
}
}
In this case the api_key variable is being passed into the Terraform module using the following:
module "module_name" {
count = var.enable_module ? 1 : 0
source = "./module/path/here"
api_key = data.aws_kms_secrets.secret[count.index].plaintext["api_key"]
}
data "aws_kms_secrets" "secret" {
count = var.enable_module ? 1 : 0
secret {
name = "api_key"
payload = var.encrypted_api_key
}
}
I haven't tried other data sources so far. I'll keep investigating and post an update if I spot anything of interest.
Update:
So far I've found that removing the count from resources had no effect (tried it just in case!). However, switching from the aws_kms_secrets data source to null_data_source worked. So the secret definition was the same as above, but the following was passed into the module:
module "module_name" {
count = var.enable_module ? 1 : 0
source = "./module/path/here"
api_key = data.null_data_source.secret[0].outputs["api_key"]
}
data "null_data_source" "secret" {
count = var.enable_module ? 1 : 0
inputs = {
api_key = var.encrypted_api_key
}
}
Unsure what this means at the moment; perhaps it's not actually anything to do with this provider and it's an issue with other providers? I'll keep investigating!
Update 2: When I create a new EKS cluster in a separate AWS account and use the same resources I can successfully add a secret. So I can currently only replicate this in a staging environment, not in any new environments.
Running into the same issue with the following simple secret creation:
resource "kubernetes_secret" "rabbitmq" {
metadata {
name = "rabbitmq-admin-credentials"
namespace = "rabbitmq"
labels = {
"app.kubernetes.io/managed-by" = "terraform"
}
}
data = {
username = data.cloudamqp_credentials.credentials.username
password = data.cloudamqp_credentials.credentials.password
host = "${replace(cloudamqp_instance.instance.host, ".rmq.", ".in.")}"
external_host = cloudamqp_instance.instance.host
vhost = cloudamqp_instance.instance.vhost
apikey = cloudamqp_instance.instance.apikey
}
}
The first time it got created fine, but I'm getting this error on subsequent runs.
My issue is resolved. I'm not sure what went wrong, but the encrypted data I was passing into the aws_kms_secrets data source was corrupt.
Maybe this can help: https://www.terraform.io/docs/extend/terraform-0.12-compatibility.html#inaccurate-plans. I have the same issue, BTW.
I tried with 0.15.1 and 0.15.4. My workflow is also different, as I'm importing existing infra into Terraform (the exact commands are sketched below):
- I import the secret first (with success).
- I run plan (it says binary_data will be added).
- apply fails with the same error message.
When I rename the secret and create a new one, it works flawlessly.
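Roughly, the commands I'm running (the namespace is a placeholder, and the namespace/name import ID follows the provider's documented format; the address matches the my_binary_secrets resource shown below):

terraform import 'kubernetes_secret.my_binary_secrets["my-secret"]' my-namespace/my-secret
terraform plan -out=test.plan
terraform apply test.plan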
$ terraform -v
Terraform v0.15.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v3.69.0
+ provider registry.terraform.io/hashicorp/helm v2.1.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.2.0
My setup:
resource "kubernetes_secret" "my_binary_secrets" {
for_each = {
"my-secret" = {
"some.jks" = data.google_kms_secret.some_keystore.plaintext
}
}
metadata {
name = each.key
namespace = kubernetes_namespace.uk.metadata[0].name
}
type = "Opaque"
binary_data = each.value
}
// ...
data "google_kms_secret" "some_keystore" {
crypto_key = data.terraform_remote_state.keys.outputs.prod_secrets_id
ciphertext = "ommitted"
}
Similar issue on 0.14.11? We have:
resource "kubernetes_secret" "kubernetes_dashboard_csrf" {
metadata {
name = "kubernetes-dashboard-csrf"
namespace = local.namespace
labels = local.dashboard_tags
}
type = "Opaque"
data = {
csrf = "replace_me"
}
lifecycle {
ignore_changes = [data]
}
}
When running plan and then apply on the generated plan, we get issues if the secret data has changed (even though we explicitly ignore changes). We solved our issue by using ignore_changes = all (see the sketch after the YAML below), but ideally we'd like to just be able to ignore the data. For reference, here is the secret YAML after the out-of-terraform mutation:
apiVersion: v1
data:
  csrf: ok7HB0zJnyW8rfnSDiJgIH4NzkJTt8F4645uOwYNxSHjnr2fxnQX3HRk4VQeql0h45muEPuVGU7BLTwrFlBS1LUJiboMUuxDZGoFU/hYmgwTYI+iZB8OgwmAq9cjcoAWJ738QzigKzK9Z7AyTPD5h7aVbxJCTjKzCu9brbUhzYVYJLs/iaQoKrFObztn9UZ3IXn08QcyATuPfIjmCTCyj0qsCBFObxNxBcYFo5M3t1EvULBkGCrq+Px8K6p2fKHM6NtvnVaION8KRgCo8rUQ5zoCoEJZgadhbwxAHVJZFS1lZgMml4gfDogIjUyu0coPZfHTaZUWz/3bqeJNniWSCg==
kind: Secret
metadata:
  creationTimestamp: "2021-05-24T22:46:02Z"
  labels:
    app: polaris-infra
    module: kube_dashboard
    submodule: dashboard
    terraform: "true"
    version: LOCAL
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data: {}
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:module: {}
          f:submodule: {}
          f:terraform: {}
          f:version: {}
      f:type: {}
    manager: HashiCorp
    operation: Update
    time: "2021-05-24T22:46:02Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:csrf: {}
    manager: dashboard
    operation: Update
    time: "2021-05-24T22:46:38Z"
  name: kubernetes-dashboard-csrf
  namespace: dashboard
  resourceVersion: "111898"
  uid: 45ba123d-1a4a-4a23-8056-8ddffed2f306
type: Opaque
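The variant we settled on is the same resource as above with only the lifecycle block changed (a sketch, not our full module):

resource "kubernetes_secret" "kubernetes_dashboard_csrf" {
  metadata {
    name      = "kubernetes-dashboard-csrf"
    namespace = local.namespace
    labels    = local.dashboard_tags
  }
  type = "Opaque"
  data = {
    csrf = "replace_me"
  }
  lifecycle {
    # Ignore all drift, including the csrf value rewritten by the dashboard.
    ignore_changes = all
  }
}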
This just popped up for me as well. Any other workaround besides ignore_changes?
Two years later, it still happens with Terraform 1.2.6.
(terraform-containers-moz-us-west-2-dev):~$ terraform --version
Terraform v1.2.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.34.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.14.0
Your version of Terraform is out of date! The latest version
is 1.3.2. You can update by downloading from https://www.terraform.io/downloads.html
The workaround with ignore_changes = all still works.
Mine is:
resource "kubernetes_secret" "kubernetes_dashboard_csrf" {
metadata {
name = "kubernetes-dashboard-csrf"
namespace = var.namespace
labels = {
k8s-app = "kubernetes-dashboard"
}
}
lifecycle {
ignore_changes = all
}
type = "Opaque"
}
I ALWAYS have:
Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.kubernetes-dashboard.kubernetes_secret.kubernetes_dashboard_csrf to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/kubernetes" produced an invalid new
│ value for .data: inconsistent values for sensitive attribute.
ignore_changes is not working for me, even when I set it to ["data"].
I am also hitting this bug. Ignoring all changes fixed the issue for me, but I would like a permanent fix:
lifecycle {
  ignore_changes = all
}
I ran into the problem for the exact same reason. I switched to a kubernetes_manifest resource, set ignore_changes = all, and I've stopped getting the error. Not ideal, of course, but one less thing for the user to bring up :)
resource "kubernetes_manifest" "k8s_dashboard_csrf" {
manifest = {
"apiVersion" = "v1"
"kind" = "Secret"
"metadata" = {
"labels" = {
"k8s-app" = "kubernetes-dashboard"
}
"name" = "kubernetes-dashboard-csrf"
"namespace" = "kubernetes-dashboard"
}
"type" = "Opaque"
}
lifecycle {
ignore_changes = all
}
}
Terraform 1.4.6
hashicorp/kubernetes v2.20.0
hashicorp/helm v2.9.0
hashicorp/azurerm v3.55.0
hashicorp/azuread v2.38.0
hashicorp/random v3.5.1
Setting a secret from a file created in the previous GHA step is not working either, even with the lifecycle option.
I could reproduce it. I had the same error on Terraform v1.5.3, trying to set a secret from Azure Container Registry:
resource "azurerm_container_registry" "acr" {
name = "acr${var.project_name}webapp${var.deploy_target}"
resource_group_name = azurerm_resource_group.webapp.name
location = azurerm_resource_group.webapp.location
sku = "Basic"
admin_enabled = true
}
provider "kubernetes" {
host = azurerm_kubernetes_cluster.aks.kube_config.0.host
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_secret" "k8s_secret" {
metadata {
name = "acr-secret"
}
data = {
username = azurerm_container_registry.acr.admin_username
password = azurerm_container_registry.acr.admin_password
}
type = "kubernetes.io/basic-auth"
}
After investigation I figured out that while the cluster is warming up, Terraform throws an inconsistent plan, but when it's warm this never happens.
PS: I was automatically shutting the dev cluster down at night to save costs.
We are experiencing this issue with Terraform version 1.7.0.
It succeeded on the first run, but fails on subsequent runs with no changes to the underlying data or to the TF code. The same module code run in parallel, pulling the same value but pushing to different destinations, produces no errors.
Error: Provider produced inconsistent final plan
When expanding the plan for
module.xxx.kubernetes_secret.yyy to include new
values learned so far during apply, provider
"registry.terraform.io/hashicorp/kubernetes" produced an invalid new value
for .data: inconsistent values for sensitive attribute.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
@emmaLP: I assume the last step to reproduce should be:
terraform apply test.plan
Otherwise, it is unclear what the -out=test.plan does to inform the apply.
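For clarity, the full sequence would then be:

terraform init
terraform plan -out=test.plan
terraform apply test.plan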