terraform-provider-rancher2
[BUG] Inconsistent arguments in the `rancher2_auth_config_okta`
Rancher Server Setup
- Rancher version: 2.8.2
- Installation option (Docker install/Helm Chart): Helm Chart
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc):
- Helm chart version: v2.16.8-rancher2, k8s cluster: 1.27
- Proxy/Cert Details:
Information about the Cluster
- Kubernetes version: 1.27
- Cluster Type: local
User Information
N/A
Provider Information
- What is the version of the Rancher v2 Terraform Provider in use? v4.1.0
- What is the version of Terraform in use? v1.7.5
Describe the bug
I tried to set the `management.cattle.io/auth-provider-cleanup: user-locked` annotation on the `rancher2_auth_config_okta` resource to add a safeguard for our Okta auth provider. `terraform plan` shows the diff and the changes apply without error. However, after the apply the annotation is unchanged on the resource in the cluster, and running `terraform plan` again shows the same diff.
Terraform resource:
```hcl
resource "rancher2_auth_config_okta" "auth" {
  count            = var.create_rancher_integration ? 1 : 0
  rancher_api_host = var.rancher_base_url
  ...
  annotations = {
    "management.cattle.io/auth-provider-cleanup" = "user-locked"
  }
}
```
Terraform apply:
```
  # module.rancher2_okta.rancher2_auth_config_okta.auth[0] will be updated in-place
  ~ resource "rancher2_auth_config_okta" "auth" {
      ~ annotations = {
          ~ "management.cattle.io/auth-provider-cleanup" = "unlocked" -> "user-locked"
        }
        id   = "okta"
        name = "okta"
        # (13 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
The debug logs show a tolerated error:
```
2024-07-08T14:33:14.298+0300 [DEBUG] provider.terraform-provider-rancher2_v4.1.0: 2024/07/08 14:33:14 [INFO] Creating Auth Config okta
2024-07-08T14:33:14.437+0300 [DEBUG] provider.terraform-provider-rancher2_v4.1.0: 2024/07/08 14:33:14 [INFO] Refreshing Auth Config okta
2024-07-08T14:33:14.539+0300 [WARN] Provider "provider[\"registry.terraform.io/rancher/rancher2\"]" produced an unexpected new value for module.rancher2_okta.rancher2_auth_config_okta.auth[0], but we are tolerating it because it is using the legacy plugin SDK.

The following problems may be the cause of any confusing errors from downstream operations:
- .annotations["management.cattle.io/auth-provider-cleanup"]: was cty.StringVal("user-locked"), but now cty.StringVal("unlocked")

module.rancher2_okta.rancher2_auth_config_okta.auth[0]: Modifications complete after 2s [id=okta]
```
To Reproduce
- Add the `"management.cattle.io/auth-provider-cleanup" = "user-locked"` annotation to the `rancher2_auth_config_okta` resource
- Deploy the changes with `terraform apply`
- Check the `terraform plan` output
Actual Result
- Terraform plan shows that it will update the annotation again.
- The resource still has the old annotation:

```
kubectl get authconfig okta -oyaml | grep auth-provider-cleanup
  management.cattle.io/auth-provider-cleanup: unlocked
```
Expected Result
- No diff in the terraform plan
- The authconfig resource has the new annotation:

```
kubectl get authconfig okta -oyaml | grep auth-provider-cleanup
  management.cattle.io/auth-provider-cleanup: user-locked
```
Additional context
I see the same issue when I try to disable the Okta auth config with the `enabled = false`
resource attribute: the attribute value is not changed, with a similar tolerated error in the logs for this attribute.
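As a possible interim workaround (a sketch, not part of the original report, and it assumes kubeconfig access to the local Rancher management cluster), the annotation can be set directly on the `AuthConfig` object with kubectl, bypassing the provider; note that Terraform will still show a diff on the next plan:

```shell
# Hypothetical workaround: set the annotation directly on the AuthConfig
# object instead of through the rancher2 provider.
kubectl annotate authconfig okta \
  management.cattle.io/auth-provider-cleanup=user-locked --overwrite

# Verify the value actually stuck on the object:
kubectl get authconfig okta -oyaml | grep auth-provider-cleanup
```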