terraform-provider-tanzu-mission-control
nodepool drift causes panic in tanzu kubernetes clusters
Describe the bug
If a node pool that is managed by Terraform is removed directly from TMC, terraform plan, refresh, or apply will fail with a panic.
This seems to happen because the state file contains more node pools than the response returned by the TMC API, so the provider hits an out-of-range index here when iterating over the API response.
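A minimal sketch of the suspected failure mode and a possible fix. The types and function names below are illustrative, not the provider's actual code: they show how indexing the shorter API response with positions taken from the longer state slice panics, and how matching node pools by name instead tolerates a pool deleted out-of-band.

```go
package main

import "fmt"

type nodePool struct{ Name string }

// unsafeOverride mirrors the buggy pattern: indexing apiPools with an
// index derived from the longer state slice.
func unsafeOverride(statePools, apiPools []nodePool) []nodePool {
	out := make([]nodePool, 0, len(statePools))
	for i := range statePools {
		// panics with "index out of range" when len(apiPools) < len(statePools)
		out = append(out, apiPools[i])
	}
	return out
}

// safeOverride matches pools by name instead of by position, so a pool
// deleted directly in TMC simply drops out of the result and surfaces
// as drift rather than a crash.
func safeOverride(statePools, apiPools []nodePool) []nodePool {
	byName := make(map[string]nodePool, len(apiPools))
	for _, p := range apiPools {
		byName[p.Name] = p
	}
	out := make([]nodePool, 0, len(statePools))
	for _, p := range statePools {
		if match, ok := byName[p.Name]; ok {
			out = append(out, match)
		}
	}
	return out
}

func main() {
	state := []nodePool{{"np-1"}, {"np-2"}}
	api := []nodePool{{"np-1"}} // np-2 was deleted directly in TMC
	fmt.Println(len(safeOverride(state, api))) // 1, no panic
}
```

With the by-name lookup, the missing pool shows up as a diff in the plan instead of crashing the plugin.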
Error output:
tanzu-mission-control_tanzu_kubernetes_cluster.tkgs_cluster: Refreshing state... [id=w4-hs3-nimbus-tanzutmm/w4-hs3-nimbus-tanzutmm/tf-validation]
╷
│ Error: Plugin did not respond
│
│ with tanzu-mission-control_tanzu_kubernetes_cluster.tkgs_cluster,
│ on main.tf line 38, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster":
│ 38: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-tanzu-mission-control_v1.4.5 plugin:
panic: runtime error: index out of range [3] with length 3
goroutine 98 [running]:
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.removeUnspecifiedNodePoolsOverrides({0x140004394c0?, 0x4, 0x1054f3a2d?}, 0x140009debd0)
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:402 +0x394
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.resourceTanzuKubernetesClusterRead({0x105bf74e0, 0x14000d8b350}, 0x14000d97d00, {0x105ade8c0?, 0x140006745a0})
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/resource_tanzu_kuberenetes_cluster.go:154 +0x458
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x105bf74e0?, {0x105bf74e0?, 0x14000d8b350?}, 0xd?, {0x105ade8c0?, 0x140006745a0?})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:719 +0x64
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0x14000693b20, {0x105bf74e0, 0x14000d8b350}, 0x14000991ba0, {0x105ade8c0, 0x140006745a0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:1015 +0x468
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0x14000836f60, {0x105bf7438?, 0x140009e4940?}, 0x140009e49c0)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:613 +0x400
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0x140003d2000, {0x105bf74e0?, 0x14000d8aba0?}, 0x14000444360)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:746 +0x3b4
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0x105af9180?, 0x140003d2000}, {0x105bf74e0, 0x14000d8aba0}, 0x140001e2af0, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:349 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x1400060c1e0, {0x105bfe378, 0x14000503d40}, 0x140005c10e0, 0x140008441b0, 0x1069beed0, 0x0)
google.golang.org/[email protected]/server.go:1335 +0xc64
google.golang.org/grpc.(*Server).handleStream(0x1400060c1e0, {0x105bfe378, 0x14000503d40}, 0x140005c10e0, 0x0)
google.golang.org/[email protected]/server.go:1712 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.1()
google.golang.org/[email protected]/server.go:947 +0xb4
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/[email protected]/server.go:958 +0x174
Error: The terraform-provider-tanzu-mission-control_v1.4.5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
Reproduction steps
- create a cluster with two node pools
- delete a node pool directly from TMC
- run terraform plan
Expected behavior
The provider should not panic, and should instead report the drift between the state and the cluster.
Additional context
No response