terraform-provider-kubernetes-alpha
Error: Failed to update proposed state from prior state
Terraform, Provider, Kubernetes versions
Terraform version: 0.14.8
Provider version: 0.3.2
Kubernetes version: 1.17.17
Affected Resource(s)
kubernetes_manifest
Terraform Configuration Files
resource "kubernetes_manifest" "probe_servicemonitor" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "ServiceMonitor"
    metadata = {
      name      = "probe-service-monitor"
      namespace = local.monitor_namespace
      labels = {
        release = "prometheus"
      }
    }
    spec = {
      selector = {
        matchLabels = {
          "app.kubernetes.io/name" = local.deployment_name
        }
      }
      endpoints = [
        for probe in local.probes : {
          port          = "http"
          honorLabels   = true
          path          = "/probe"
          scrapeTimeout = "20s"
          interval      = "30s"
          relabelings = [
            {
              sourceLabels = ["__param_target"]
              targetLabel  = "instance"
            },
            {
              sourceLabels = ["__address__"]      # always found, i.e. always true, just to cause this "relabelling" to always happen
              targetLabel  = "service_group"      # new label to be added
              replacement  = probe.service_group  # static value assigned to the label
            }
          ]
          params = {
            "module" = [
              probe.probe_module
            ]
            "target" = [
              probe.url
            ]
          }
        }
      ]
    }
  }
}
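One way to sanity-check what the for expression expands into is to evaluate a reduced version of it in terraform console (this assumes local.probes is defined in the configuration of the current working directory):

```shell
# Print the module/target pairs the for expression will expand into,
# one object per entry in local.probes
echo '[ for p in local.probes : { module = p.probe_module, target = p.url } ]' | terraform console
```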
local.probes looks like this:
probes = [
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
  {
    name          = "example"
    url           = "https://example.com"
    probe_module  = "http_noauth"
    service_group = "example"
  },
]
Debug Output
https://gist.github.com/gmintoco/79be8ccffa4ec2322bc790384d94d17e
Steps to Reproduce
- terraform apply: this works successfully on the first attempt
- terraform plan: this fails after the resources have been deployed
Expected Behavior
Plan to succeed
Actual Behavior
Plan failed with:
Error: Failed to update proposed state from prior state
Important Factoids
Nothing unusual that I can think of
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
@gmintoco I cannot reproduce this issue with the configuration and versions you mentioned. It worked as expected every time I tried it.
Are there any other particularities to your environment? Are you using any kind of remote state storage backend?
Here's how I tried to repro:
~/test-alpha-197 » terraform apply -auto-approve alex@Alexs-MBP
kubernetes_manifest.probe_servicemonitor: Creating...
kubernetes_manifest.probe_servicemonitor: Creation complete after 0s
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
-----------------------------------------------------------------------------------------------------------------------------
~/test-alpha-197 » terraform plan alex@Alexs-MBP
kubernetes_manifest.probe_servicemonitor: Refreshing state...
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
-----------------------------------------------------------------------------------------------------------------------------
~/test-alpha-197 » terraform version alex@Alexs-MBP
Terraform v0.14.8
+ provider registry.terraform.io/hashicorp/kubernetes-alpha v0.3.2
Your version of Terraform is out of date! The latest version
is 0.15.1. You can update by downloading from https://www.terraform.io/downloads.html
-----------------------------------------------------------------------------------------------------------------------------
~/test-alpha-197 » kubectl version alex@Alexs-MBP
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.17", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", GitTreeState:"clean", BuildDate:"2021-01-13T13:13:00Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Hey @alexsomesan, sorry for the late reply.
I am using a GCS state bucket, but I was also unable to reproduce this minimally (even with many combinations of removing and adding resources).
I added a test GCS bucket and tested again, and was still unable to reproduce. However, I am still seeing these errors (and I can see that other users have opened issues with similar errors as well).
To add some more context: I am currently seeing this when updating a resource (no for loops or anything special).
It is a Prometheus rule resource to which I have added 2 rules. As a workaround, deleting the resource from the cluster and then running plan again allows it to continue.
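The workaround can be sketched as shell commands (the resource kind, name, and namespace below are hypothetical placeholders for the affected object):

```shell
# Delete the live object so the provider no longer has a prior state to reconcile
# ("my-rules" and "monitoring" are placeholder names)
kubectl delete prometheusrule my-rules -n monitoring

# Planning now succeeds; the next apply recreates the resource
terraform plan
terraform apply
```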
Thanks a lot for your work on this provider (and especially the recently released local planning aspect); it's great to be able to deploy CRDs using Terraform :)
Actually, scratch that @alexsomesan, I was just able to reproduce this error.
Minimal Reproduction Steps (using GCS statefile storage):
- Deploy a resource
- Add a label or annotation after deployment to the resource in the cluster
- Modify the resource definition (add a new Prometheus rule for example)
- Attempt to plan and encounter an error
Adding labels after deployment is not uncommon with operators (for example, the Prometheus operator adds a "prometheus-operator-validated: 'true'" label to all rules that it identifies).
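The steps above can be sketched as shell commands run against a live cluster (the resource name and namespace are hypothetical placeholders; the label in step 2 mimics what the Prometheus operator applies):

```shell
# 1. Deploy the resource
terraform apply -auto-approve

# 2. Simulate an operator labelling the live object out-of-band
#    ("probe-service-monitor" / "monitoring" are placeholder names)
kubectl label servicemonitor probe-service-monitor \
  -n monitoring prometheus-operator-validated=true

# 3. Modify the resource definition in the configuration
#    (e.g. add a new Prometheus rule), then plan again:
terraform plan
# fails with: Error: Failed to update proposed state from prior state
```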
Let me know if you need any further details :)