terraform-provider-rancher2

No support for `admission_configuration`

Open jocelynthode opened this issue 3 years ago • 4 comments

Hi,

The provider currently expects a string for `plugins`, but it receives a `[]interface {}`:

│ Error: rke_config.0.services.0.kube_api.0.admission_configuration.plugins: '' expected type 'string', got unconvertible type '[]interface {}', value: '[map[configuration:map[apiVersion:pod-security.admission.config.k8s.io/v1alpha1 defaults:map[audit:restricted audit-version:latest enforce:restricted enforce-version:latest warn:restricted warn-version:latest] exemptions:map[namespaces:[] runtimeClasses:[] usernames:[]] kind:PodSecurityConfiguration] name:PodSecurity path:] map[configuration:map[apiVersion:eventratelimit.admission.k8s.io/v1alpha1 kind:Configuration limits:[map[burst:20000 qps:5000 type:Server]]] name:EventRateLimit path:]]'

Here is the yaml that generates this error:

admissionConfiguration:
  apiVersion: apiserver.config.k8s.io/v1
  kind: AdmissionConfiguration
  plugins:
  - configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1alpha1
      defaults:
        audit: restricted
        audit-version: latest
        enforce: restricted
        enforce-version: latest
        warn: restricted
        warn-version: latest
      exemptions:
        namespaces: []
        runtimeClasses: []
        usernames: []
      kind: PodSecurityConfiguration
    name: PodSecurity
    path: ""
  - configuration:
      apiVersion: eventratelimit.admission.k8s.io/v1alpha1
      kind: Configuration
      limits:
      - burst: 20000
        qps: 5000
        type: Server
    name: EventRateLimit
    path: ""

The issue seems to come from here: https://github.com/rancher/terraform-provider-rancher2/blob/master/rancher2/schema_cluster_rke_config_services_kube_api.go#L286

since a `TypeMap` only allows `String` values. Is there a workaround for this? Otherwise, I'd be glad to submit a PR for this fix if someone could point me in the right direction.
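For illustration, here is a minimal, self-contained Go sketch (not the provider's actual code; `toMapString` is a hypothetical reconstruction of the kind of helper a string-valued `TypeMap` implies) of why a schema whose map values must be strings rejects the nested `plugins` list:

```go
package main

import "fmt"

// toMapString mimics a helper that only accepts string map values,
// which is the constraint a TypeMap with string elements imposes.
// A YAML list such as "plugins" arrives as []interface{} and fails.
func toMapString(in map[string]interface{}) (map[string]string, error) {
	out := make(map[string]string, len(in))
	for k, v := range in {
		s, ok := v.(string)
		if !ok {
			return nil, fmt.Errorf("%s: '' expected type 'string', got unconvertible type '%T'", k, v)
		}
		out[k] = s
	}
	return out, nil
}

func main() {
	// "plugins" is a list of maps in the admission configuration YAML.
	cfg := map[string]interface{}{
		"plugins": []interface{}{
			map[string]interface{}{"name": "PodSecurity"},
		},
	}
	if _, err := toMapString(cfg); err != nil {
		fmt.Println(err) // prints: plugins: '' expected type 'string', got unconvertible type '[]interface {}'
	}
}
```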

jocelynthode avatar Apr 22 '22 14:04 jocelynthode

Test plan

The author of PR https://github.com/rancher/terraform-provider-rancher2/pull/909 has been using the tf rancher2 changes in production for a few months without any issues. He has been testing on an RKE cluster with Kubernetes 1.23.6, so I would test this on a k8s 1.24 cluster to make sure that works. If QA has low bandwidth, this issue can be closed.

Steps

  • [ ] Provision an RKE cluster with Kubernetes 1.24 on Rancher v2.6-head using the tf rancher provider. An example of a main.tf, cluster.yml, and admission configuration is here.
  • [ ] Verify the admission configuration fields are viewable in the cluster yaml
  • [ ] Verify the cluster is healthy

a-blender avatar Sep 19 '22 20:09 a-blender

@annablender Hey Anna, do you have an ETA on when this feature will make it to a new version?

jocelynthode avatar Oct 20 '22 11:10 jocelynthode

@jocelynthode Hello! The tf rancher2 provider v1.24.2 will be released tomorrow or early next week, 'out of band' from Rancher but closest to Rancher 2.6.9.

a-blender avatar Oct 20 '22 15:10 a-blender

Great, thanks for the feedback :)

jocelynthode avatar Oct 20 '22 15:10 jocelynthode

@jocelynthode unfortunately, we need to revert the changes made in your PR, as the schema changes have prevented other users from upgrading their provider version. Apologies; this was an oversight on my part. We feel it is a better course of action to reestablish the old format of the field than to require existing users to migrate their Terraform configurations.

HarrisonWAffel avatar Nov 08 '22 17:11 HarrisonWAffel

@HarrisonWAffel That's an unfortunate turn of events. Is there a way to get this implemented differently? This is really blocking for us, and I feel the provider should still give us the ability to configure AdmissionConfiguration.

jocelynthode avatar Nov 09 '22 06:11 jocelynthode

There currently is a way to configure admission configurations; it's just that the field is not a structured resource but a simple map. I thought that in #909 you noted it was possible to use the field, but that it required casting to do so?

HarrisonWAffel avatar Nov 09 '22 16:11 HarrisonWAffel

Pardon me, it's been a few months. I think I was referring to the fact that the k8s Go package was making me do a lot of typecasting in my PR.

jocelynthode avatar Nov 09 '22 17:11 jocelynthode

Hi,

we actually get the same error as soon as we enable the admission controller in the Rancher GUI. We use k8s 1.25.9 clusters and rancher2 provider version 3.0.1 with the rancher2_cluster resource.

The "admission_configuration" map is a known key to the provider, as stated in the docs: https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster#admission_configuration. But whether I define it or not, I get the error on terraform apply.

Is there a plan for when this will be fixed? The EOL of 1.24 is drawing nearer and nearer...

Error: rke_config.0.services.0.kube_api.0.admission_configuration.plugins: '' expected type 'string', got unconvertible type '[]interface {}', value: '[map[configuration:map[apiVersion:pod-security.admission.config.k8s.io/v1 defaults:map[audit:restricted audit-version:latest enforce:restricted enforce-version:latest warn:restricted warn-version:latest] exemptions:map[namespaces:[ingress-nginx kube-system cattle-system cattle-epinio-system cattle-fleet-system longhorn-system cattle-neuvector-system cattle-monitoring-system rancher-alerting-drivers cis-operator-system cattle-csp-adapter-system cattle-externalip-system cattle-gatekeeper-system istio-system cattle-istio-system cattle-logging-system cattle-windows-gmsa-system cattle-sriov-system cattle-ui-plugin-system tigera-operator]] kind:PodSecurityConfiguration] name:PodSecurity path:]]'

And by the way, how should the admission_configuration be configured? It has to be a map of string: string... but looking at what Rancher actually configures, the "plugins" key is a list of maps... confusing.

#/rancher_kubernetes_engine_config/services/kube-api/admission_configuration
api_version: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    defaults:
      audit: restricted
      audit-version: latest
      enforce: restricted
      enforce-version: latest
      warn: restricted
      warn-version: latest
    exemptions:
      namespaces:
      - ingress-nginx
      - kube-system
      - cattle-system
      - cattle-epinio-system
      - cattle-fleet-system
      - longhorn-system
      - cattle-neuvector-system
      - cattle-monitoring-system
      - rancher-alerting-drivers
      - cis-operator-system
      - cattle-csp-adapter-system
      - cattle-externalip-system
      - cattle-gatekeeper-system
      - istio-system
      - cattle-istio-system
      - cattle-logging-system
      - cattle-windows-gmsa-system
      - cattle-sriov-system
      - cattle-ui-plugin-system
      - tigera-operator
    kind: PodSecurityConfiguration
  name: PodSecurity
  path: ''

Heiko-san avatar Jun 16 '23 14:06 Heiko-san

@Heiko-san: Yeah never found a way around it and we're still using a fork of the original plugin built on my branch just for that...

jocelynthode avatar Jun 16 '23 17:06 jocelynthode

> @Heiko-san: Yeah never found a way around it and we're still using a fork of the original plugin built on my branch just for that...

@jocelynthode thank you. Is this fork available somewhere? Or could you provide a patch file or something?

Heiko-san avatar Jun 19 '23 09:06 Heiko-san

@Heiko-san Still using the branch from the original PR: https://github.com/jocelynthode/terraform-provider-rancher2/tree/add-admission-configuration

We haven't kept it in sync with current development, though.

See original PR here: https://github.com/rancher/terraform-provider-rancher2/pull/909

jocelynthode avatar Jun 19 '23 10:06 jocelynthode

Hi @jocelynthode , thanks again!

I wanted to let you know: we took the original repo and just cherry-picked your two commits on top of HEAD again, and it still seems to work and doesn't produce any merge conflicts.

git cherry-pick 7b682c4897aba2e7092d910f2b132da10c3dca74
git cherry-pick c6c9b30c121445ba2a08dc6bb4e20f105f44f8c2

Heiko-san avatar Jun 26 '23 09:06 Heiko-san

But how do you actually configure the admission controller with it?

If I manually enable the restricted policy on a cluster, the diff of the cluster.yaml reads:

-> /default_pod_security_admission_configuration_template_name (only present on source2)
rancher-restricted

-> /rancher_kubernetes_engine_config/services/kube-api/admission_configuration (only present on source2)
api_version: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    defaults:
      audit: restricted
      audit-version: latest
      enforce: restricted
      enforce-version: latest
      warn: restricted
      warn-version: latest
    exemptions:
      namespaces:
      - ingress-nginx
      - kube-system
      - cattle-system
      - cattle-epinio-system
      - cattle-fleet-system
      - longhorn-system
      - cattle-neuvector-system
      - cattle-monitoring-system
      - rancher-alerting-drivers
      - cis-operator-system
      - cattle-csp-adapter-system
      - cattle-externalip-system
      - cattle-gatekeeper-system
      - istio-system
      - cattle-istio-system
      - cattle-logging-system
      - cattle-windows-gmsa-system
      - cattle-sriov-system
      - cattle-ui-plugin-system
      - tigera-operator
    kind: PodSecurityConfiguration
  name: PodSecurity
  path: ''

If I do the same with your patches, the diff reads:

-> /rancher_kubernetes_engine_config/services/kube-api/admission_configuration (only present on source2)
api_version: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- configuration:
  apiVersion: pod-security.admission.config.k8s.io/v1
  defaults:
    audit: restricted
    audit-version: latest
    enforce: restricted
    enforce-version: latest
    warn: restricted
    warn-version: latest
  exemptions:
    namespaces:
    - ingress-nginx
    - kube-system
    - cattle-system
    - cattle-epinio-system
    - cattle-fleet-system
    - longhorn-system
    - cattle-neuvector-system
    - cattle-monitoring-system
    - rancher-alerting-drivers
    - cis-operator-system
    - cattle-csp-adapter-system
    - cattle-externalip-system
    - cattle-gatekeeper-system
    - istio-system
    - cattle-istio-system
    - cattle-logging-system
    - cattle-windows-gmsa-system
    - cattle-sriov-system
    - cattle-ui-plugin-system
    - tigera-operator
  kind: PodSecurityConfiguration
name: PodSecurity
path: ''

So default_pod_security_admission_configuration_template_name is missing, and thus if I edit the cluster, the GUI says the chosen policy is "none". However, the provider doesn't seem to know a field like default_pod_security_admission_configuration_template_name.

Heiko-san avatar Jun 26 '23 11:06 Heiko-san

Also, if you manually enable the policy and then roll out Terraform, it wants to disable it again, which is correct:

    ~ rke_config {
            # (11 unchanged attributes hidden)

          ~ services {

              ~ kube_api {
                    # (6 unchanged attributes hidden)

                  - admission_configuration {
                      - api_version = "apiserver.config.k8s.io/v1" -> null
                      - kind        = "AdmissionConfiguration" -> null

                      - plugins {
                          - configuration = <<-EOT
                                apiVersion: pod-security.admission.config.k8s.io/v1
                                defaults:
                                  audit: restricted
                                  audit-version: latest
                                  enforce: restricted
                                  enforce-version: latest
                                  warn: restricted
                                  warn-version: latest
                                exemptions:
                                  namespaces:
                                  - ingress-nginx
                                  - kube-system
                                  - cattle-system
                                  - cattle-epinio-system
                                  - cattle-fleet-system
                                  - longhorn-system
                                  - cattle-neuvector-system
                                  - cattle-monitoring-system
                                  - rancher-alerting-drivers
                                  - cis-operator-system
                                  - cattle-csp-adapter-system
                                  - cattle-externalip-system
                                  - cattle-gatekeeper-system
                                  - istio-system
                                  - cattle-istio-system
                                  - cattle-logging-system
                                  - cattle-windows-gmsa-system
                                  - cattle-sriov-system
                                  - cattle-ui-plugin-system
                                  - tigera-operator
                                kind: PodSecurityConfiguration
                            EOT -> null
                          - name          = "PodSecurity" -> null
                        }
                    }

                    # (1 unchanged block hidden)
                }

                # (5 unchanged blocks hidden)
            }

            # (8 unchanged blocks hidden)
        }

But it suffices to configure the following in the Terraform code for it to say "no changes"... which obviously is NOT correct:

            kube_api {
                # ...
                admission_configuration { }
            }

So the diff is not calculated correctly.

However, this still helps more than a provider that dies with an exception, so thanks a lot!

I think we will go with your patches + the following tf code and then configure the policies manually.

    lifecycle {
        ignore_changes = [
            rke_config.0.services.0.kube_api.0.admission_configuration,
        ]
    }

Heiko-san avatar Jun 26 '23 11:06 Heiko-san

OK, we have now added the default_pod_security_admission_configuration_template_name field (not sure if it is correct in all places, but it seems to work as expected). Now we only configure this field and let Rancher configure the admission_configuration, while we use the ignore_changes approach from above in our tf code.
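As a sketch of how this approach fits together (the resource body is illustrative; only the new field name and the ignore_changes path come from the patch and the earlier comment), the resulting Terraform config looks something like:

```hcl
resource "rancher2_cluster" "example" {
  name = "example"

  # New field from the patch below: point Rancher at a PSA template
  # and let it render the admission_configuration itself.
  default_pod_security_admission_configuration_template_name = "rancher-restricted"

  rke_config {
    # ... node and service configuration ...
  }

  # Rancher writes the admission_configuration, so ignore its drift.
  lifecycle {
    ignore_changes = [
      rke_config.0.services.0.kube_api.0.admission_configuration,
    ]
  }
}
```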

diff --git a/rancher2/data_source_rancher2_cluster.go b/rancher2/data_source_rancher2_cluster.go
index a2527bd3..30089c93 100644
--- a/rancher2/data_source_rancher2_cluster.go
+++ b/rancher2/data_source_rancher2_cluster.go
@@ -180,6 +180,11 @@ func dataSourceRancher2Cluster() *schema.Resource {
 				Computed:    true,
 				Description: "Cluster template revision ID",
 			},
+			"default_pod_security_admission_configuration_template_name": {
+				Type:        schema.TypeString,
+				Computed:    true,
+				Description: "Default pod security admission template",
+			},
 			"default_pod_security_policy_template_id": {
 				Type:        schema.TypeString,
 				Computed:    true,
diff --git a/rancher2/resource_rancher2_cluster.go b/rancher2/resource_rancher2_cluster.go
index 10fee44d..314f8e16 100644
--- a/rancher2/resource_rancher2_cluster.go
+++ b/rancher2/resource_rancher2_cluster.go
@@ -306,17 +306,18 @@ func resourceRancher2ClusterUpdate(d *schema.ResourceData, meta interface{}) err
 		"fleetAgentDeploymentCustomization":   fleetAgentDeploymentCustomization,
 		"description":                         d.Get("description").(string),
 		"defaultPodSecurityPolicyTemplateId":  d.Get("default_pod_security_policy_template_id").(string),
-		"desiredAgentImage":                   d.Get("desired_agent_image").(string),
-		"desiredAuthImage":                    d.Get("desired_auth_image").(string),
-		"dockerRootDir":                       d.Get("docker_root_dir").(string),
-		"fleetWorkspaceName":                  d.Get("fleet_workspace_name").(string),
-		"enableClusterAlerting":               d.Get("enable_cluster_alerting").(bool),
-		"enableClusterMonitoring":             d.Get("enable_cluster_monitoring").(bool),
-		"enableNetworkPolicy":                 &enableNetworkPolicy,
-		"istioEnabled":                        d.Get("enable_cluster_istio").(bool),
-		"localClusterAuthEndpoint":            expandClusterAuthEndpoint(d.Get("cluster_auth_endpoint").([]interface{})),
-		"annotations":                         toMapString(d.Get("annotations").(map[string]interface{})),
-		"labels":                              toMapString(d.Get("labels").(map[string]interface{})),
+		"defaultPodSecurityAdmissionConfigurationTemplateName": d.Get("default_pod_security_admission_configuration_template_name").(string),
+		"desiredAgentImage":        d.Get("desired_agent_image").(string),
+		"desiredAuthImage":         d.Get("desired_auth_image").(string),
+		"dockerRootDir":            d.Get("docker_root_dir").(string),
+		"fleetWorkspaceName":       d.Get("fleet_workspace_name").(string),
+		"enableClusterAlerting":    d.Get("enable_cluster_alerting").(bool),
+		"enableClusterMonitoring":  d.Get("enable_cluster_monitoring").(bool),
+		"enableNetworkPolicy":      &enableNetworkPolicy,
+		"istioEnabled":             d.Get("enable_cluster_istio").(bool),
+		"localClusterAuthEndpoint": expandClusterAuthEndpoint(d.Get("cluster_auth_endpoint").([]interface{})),
+		"annotations":              toMapString(d.Get("annotations").(map[string]interface{})),
+		"labels":                   toMapString(d.Get("labels").(map[string]interface{})),
 	}
 
 	// cluster_monitoring is not updated here. Setting old `enable_cluster_monitoring` value if it was updated
diff --git a/rancher2/schema_cluster.go b/rancher2/schema_cluster.go
index 734bb521..7b4f5132 100644
--- a/rancher2/schema_cluster.go
+++ b/rancher2/schema_cluster.go
@@ -228,6 +228,11 @@ func clusterDataFieldsV0() map[string]*schema.Schema {
 			Computed:    true,
 			Description: "Cluster template revision ID",
 		},
+		"default_pod_security_admission_configuration_template_name": {
+			Type:        schema.TypeString,
+			Computed:    true,
+			Description: "Default pod security admission template",
+		},
 		"default_pod_security_policy_template_id": {
 			Type:        schema.TypeString,
 			Computed:    true,
@@ -390,6 +395,11 @@ func clusterFieldsV0() map[string]*schema.Schema {
 			Optional:    true,
 			Description: "Cluster template revision ID",
 		},
+		"default_pod_security_admission_configuration_template_name": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Description: "Default pod security admission template",
+		},
 		"default_pod_security_policy_template_id": {
 			Type:        schema.TypeString,
 			Optional:    true,
@@ -663,6 +673,11 @@ func clusterFields() map[string]*schema.Schema {
 			Optional:    true,
 			Description: "Cluster template revision ID",
 		},
+		"default_pod_security_admission_configuration_template_name": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Description: "Default pod security admission template",
+		},
 		"default_pod_security_policy_template_id": {
 			Type:        schema.TypeString,
 			Optional:    true,
diff --git a/rancher2/structure_cluster.go b/rancher2/structure_cluster.go
index 1fcf84c4..812ba936 100644
--- a/rancher2/structure_cluster.go
+++ b/rancher2/structure_cluster.go
@@ -104,6 +104,8 @@ func flattenCluster(d *schema.ResourceData, in *Cluster, clusterRegToken *manage
 		d.Set("default_pod_security_policy_template_id", in.DefaultPodSecurityPolicyTemplateID)
 	}
 
+	d.Set("default_pod_security_admission_configuration_template_name", in.DefaultPodSecurityAdmissionConfigurationTemplateName)
+
 	if len(in.DesiredAgentImage) > 0 {
 		d.Set("desired_agent_image", in.DesiredAgentImage)
 	}
@@ -462,6 +464,10 @@ func expandCluster(in *schema.ResourceData) (*Cluster, error) {
 		obj.DefaultPodSecurityPolicyTemplateID = v
 	}
 
+	if v, ok := in.Get("default_pod_security_admission_configuration_template_name").(string); ok {
+		obj.DefaultPodSecurityAdmissionConfigurationTemplateName = v
+	}
+
 	if v, ok := in.Get("desired_agent_image").(string); ok && len(v) > 0 {
 		obj.DesiredAgentImage = v
 	}
diff --git a/rancher2/structure_cluster_test.go b/rancher2/structure_cluster_test.go
index fbd5a2b1..e24cf8b0 100644
--- a/rancher2/structure_cluster_test.go
+++ b/rancher2/structure_cluster_test.go
@@ -265,6 +265,7 @@ func testCluster() {
 	testClusterConfAKS.Description = "description"
 	testClusterConfAKS.Driver = clusterDriverAKS
 	testClusterConfAKS.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfAKS.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfAKS.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfAKS.EnableClusterMonitoring = true
 	testClusterConfAKS.EnableNetworkPolicy = newTrue()
@@ -277,13 +278,14 @@ func testCluster() {
 		"description":                "description",
 		"cluster_auth_endpoint":      testLocalClusterAuthEndpointInterface,
 		"cluster_registration_token": testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverAKS,
-		"aks_config":                              testClusterAKSConfigInterface,
-		"system_project_id":                       "system_project_id",
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverAKS,
+		"aks_config":                                                 testClusterAKSConfigInterface,
+		"system_project_id":                                          "system_project_id",
 	}
 	testClusterConfEKS = &Cluster{
 		AmazonElasticContainerServiceConfig: testClusterEKSConfigConf,
@@ -292,6 +294,7 @@ func testCluster() {
 	testClusterConfEKS.Description = "description"
 	testClusterConfEKS.Driver = clusterDriverEKS
 	testClusterConfEKS.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfEKS.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfEKS.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfEKS.EnableClusterMonitoring = true
 	testClusterConfEKS.EnableNetworkPolicy = newTrue()
@@ -304,13 +307,14 @@ func testCluster() {
 		"description":                "description",
 		"cluster_auth_endpoint":      testLocalClusterAuthEndpointInterface,
 		"cluster_registration_token": testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverEKS,
-		"eks_config":                              testClusterEKSConfigInterface,
-		"system_project_id":                       "system_project_id",
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverEKS,
+		"eks_config":                                                 testClusterEKSConfigInterface,
+		"system_project_id":                                          "system_project_id",
 	}
 	testClusterConfEKSV2 = &Cluster{}
 	testClusterConfEKSV2.EKSConfig = testClusterEKSConfigV2Conf
@@ -349,6 +353,7 @@ func testCluster() {
 	testClusterConfGKE.Description = "description"
 	testClusterConfGKE.Driver = clusterDriverGKE
 	testClusterConfGKE.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfGKE.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfGKE.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfGKE.EnableClusterMonitoring = true
 	testClusterConfGKE.EnableNetworkPolicy = newTrue()
@@ -361,13 +366,14 @@ func testCluster() {
 		"description":                "description",
 		"cluster_auth_endpoint":      testLocalClusterAuthEndpointInterface,
 		"cluster_registration_token": testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverGKE,
-		"gke_config":                              testClusterGKEConfigInterface,
-		"system_project_id":                       "system_project_id",
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverGKE,
+		"gke_config":                                                 testClusterGKEConfigInterface,
+		"system_project_id":                                          "system_project_id",
 	}
 	testClusterConfK3S = &Cluster{}
 	testClusterConfK3S.Name = "test"
@@ -375,6 +381,7 @@ func testCluster() {
 	testClusterConfK3S.K3sConfig = testClusterK3SConfigConf
 	testClusterConfK3S.Driver = clusterDriverK3S
 	testClusterConfK3S.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfK3S.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfK3S.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfK3S.EnableClusterMonitoring = true
 	testClusterConfK3S.EnableNetworkPolicy = newTrue()
@@ -402,6 +409,7 @@ func testCluster() {
 	testClusterConfGKEV2.Description = "description"
 	testClusterConfGKEV2.Driver = clusterDriverGKEV2
 	testClusterConfGKEV2.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfGKEV2.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfGKEV2.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfGKEV2.EnableClusterMonitoring = true
 	testClusterConfGKEV2.EnableNetworkPolicy = newTrue()
@@ -414,13 +422,14 @@ func testCluster() {
 		"description":                "description",
 		"cluster_auth_endpoint":      testLocalClusterAuthEndpointInterface,
 		"cluster_registration_token": testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverGKEV2,
-		"gke_config_v2":                           testClusterGKEConfigV2Interface,
-		"system_project_id":                       "system_project_id",
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverGKEV2,
+		"gke_config_v2":                                              testClusterGKEConfigV2Interface,
+		"system_project_id":                                          "system_project_id",
 	}
 	testClusterConfOKE = &Cluster{
 		OracleKubernetesEngineConfig: testClusterOKEConfigConf,
@@ -429,6 +438,7 @@ func testCluster() {
 	testClusterConfOKE.Description = "description"
 	testClusterConfOKE.Driver = clusterOKEKind
 	testClusterConfOKE.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfOKE.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfOKE.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfOKE.EnableClusterMonitoring = true
 	testClusterConfOKE.EnableNetworkPolicy = newTrue()
@@ -457,30 +467,32 @@ func testCluster() {
 	testClusterConfRKE.AgentEnvVars = testClusterEnvVarsConf
 	testClusterConfRKE.ClusterAgentDeploymentCustomization = testClusterAgentDeploymentCustomizationConf
 	testClusterConfRKE.FleetAgentDeploymentCustomization = testClusterAgentDeploymentCustomizationConf
+	testClusterConfRKE.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfRKE.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfRKE.FleetWorkspaceName = "fleet-test"
 	testClusterConfRKE.EnableClusterMonitoring = true
 	testClusterConfRKE.EnableNetworkPolicy = newTrue()
 	testClusterConfRKE.LocalClusterAuthEndpoint = testLocalClusterAuthEndpointConf
 	testClusterInterfaceRKE = map[string]interface{}{
-		"id":                                      "id",
-		"name":                                    "test",
-		"agent_env_vars":                          testClusterEnvVarsInterface,
-		"cluster_agent_deployment_customization":  testClusterAgentDeploymentCustomizationInterface,
-		"fleet_agent_deployment_customization":    testClusterAgentDeploymentCustomizationInterface,
-		"default_project_id":                      "default_project_id",
-		"description":                             "description",
-		"cluster_auth_endpoint":                   testLocalClusterAuthEndpointInterface,
-		"cluster_registration_token":              testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"fleet_workspace_name":                    "fleet-test",
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverRKE,
-		"rke_config":                              testClusterRKEConfigInterface,
-		"system_project_id":                       "system_project_id",
-		"windows_prefered_cluster":                false,
+		"id":                                     "id",
+		"name":                                   "test",
+		"agent_env_vars":                         testClusterEnvVarsInterface,
+		"cluster_agent_deployment_customization": testClusterAgentDeploymentCustomizationInterface,
+		"fleet_agent_deployment_customization":   testClusterAgentDeploymentCustomizationInterface,
+		"default_project_id":                     "default_project_id",
+		"description":                            "description",
+		"cluster_auth_endpoint":                  testLocalClusterAuthEndpointInterface,
+		"cluster_registration_token":             testClusterRegistrationTokenInterface,
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"fleet_workspace_name":                                       "fleet-test",
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverRKE,
+		"rke_config":                                                 testClusterRKEConfigInterface,
+		"system_project_id":                                          "system_project_id",
+		"windows_prefered_cluster":                                   false,
 	}
 	testClusterConfRKE2 = &Cluster{}
 	testClusterConfRKE2.Name = "test"
@@ -490,6 +502,7 @@ func testCluster() {
 	testClusterConfRKE2.AgentEnvVars = testClusterEnvVarsConf
 	testClusterConfRKE2.ClusterAgentDeploymentCustomization = testClusterAgentDeploymentCustomizationConf
 	testClusterConfRKE2.FleetAgentDeploymentCustomization = testClusterAgentDeploymentCustomizationConf
+	testClusterConfRKE2.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfRKE2.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfRKE2.EnableClusterMonitoring = true
 	testClusterConfRKE2.EnableNetworkPolicy = newTrue()
@@ -522,6 +535,7 @@ func testCluster() {
 	testClusterConfTemplate.ClusterTemplateRevisionID = "cluster_template_revision_id"
 	testClusterConfTemplate.Driver = clusterDriverRKE
 	testClusterConfTemplate.AgentEnvVars = testClusterEnvVarsConf
+	testClusterConfTemplate.DefaultPodSecurityAdmissionConfigurationTemplateName = "rancher-restricted"
 	testClusterConfTemplate.DefaultPodSecurityPolicyTemplateID = "restricted"
 	testClusterConfTemplate.EnableClusterAlerting = true
 	testClusterConfTemplate.EnableClusterMonitoring = true
@@ -535,19 +549,20 @@ func testCluster() {
 		"description":                "description",
 		"cluster_auth_endpoint":      testLocalClusterAuthEndpointInterface,
 		"cluster_registration_token": testClusterRegistrationTokenInterface,
-		"default_pod_security_policy_template_id": "restricted",
-		"enable_cluster_alerting":                 true,
-		"enable_cluster_monitoring":               true,
-		"enable_network_policy":                   true,
-		"kube_config":                             "kube_config",
-		"driver":                                  clusterDriverRKE,
-		"cluster_template_answers":                testClusterAnswersInterface,
-		"cluster_template_id":                     "cluster_template_id",
-		"cluster_template_questions":              testClusterQuestionsInterface,
-		"cluster_template_revision_id":            "cluster_template_revision_id",
-		"rke_config":                              []interface{}{},
-		"system_project_id":                       "system_project_id",
-		"windows_prefered_cluster":                false,
+		"default_pod_security_admission_configuration_template_name": "rancher-restricted",
+		"default_pod_security_policy_template_id":                    "restricted",
+		"enable_cluster_alerting":                                    true,
+		"enable_cluster_monitoring":                                  true,
+		"enable_network_policy":                                      true,
+		"kube_config":                                                "kube_config",
+		"driver":                                                     clusterDriverRKE,
+		"cluster_template_answers":                                   testClusterAnswersInterface,
+		"cluster_template_id":                                        "cluster_template_id",
+		"cluster_template_questions":                                 testClusterQuestionsInterface,
+		"cluster_template_revision_id":                               "cluster_template_revision_id",
+		"rke_config":                                                 []interface{}{},
+		"system_project_id":                                          "system_project_id",
+		"windows_prefered_cluster":                                   false,
 	}
 }

Heiko-san avatar Jun 26 '23 15:06 Heiko-san

This is fixed in PR https://github.com/rancher/terraform-provider-rancher2/pull/1119, which updates the admission_configuration schema from a TypeMap to a complex nested type, with state migration logic to prevent errors on upgrade.
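
For illustration, the difference shows up directly in HCL: the old TypeMap schema could only hold flat string values, while a nested type allows structured plugins blocks. This sketch is illustrative only and not the exact syntax from the PR; the file name is hypothetical:

```hcl
# Old schema (TypeMap): every value had to be a plain string,
# so a structured "plugins" list could not be expressed.
admission_configuration = {
  api_version = "apiserver.config.k8s.io/v1"
  kind        = "AdmissionConfiguration"
}

# New schema (nested type): "plugins" becomes a repeatable block
# whose "configuration" holds the plugin config as a YAML string.
admission_configuration {
  api_version = "apiserver.config.k8s.io/v1"
  kind        = "AdmissionConfiguration"
  plugins {
    name          = "PodSecurity"
    configuration = file("pod-security.yml") # hypothetical file name
  }
}
```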

a-blender avatar Jul 19 '23 14:07 a-blender

QA Test Template

Testing with this test plan will verify that this issue is also resolved.

a-blender avatar Jul 19 '23 22:07 a-blender

@a-blender: Thanks for pointing this out!

After removing the cluster and reimporting it, I've tried to run terraform plan with the new plugin 3.1.0-rc5 and I get the following error:


Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: kind value Configuration should be: PodSecurityConfiguration
│ 
│   with rancher2_cluster.staging,
│   on cluster.tf line 1, in resource "rancher2_cluster" "staging":
│    1: resource "rancher2_cluster" "staging" {
│ 

Here is the associated terraform code:

admission_configuration {
  api_version = "apiserver.config.k8s.io/v1"
  kind        = "AdmissionConfiguration"
  plugins {
    name          = "PodSecurity"
    path          = ""
    configuration = file("../files/security.yml")
  }
  plugins {
    name          = "EventRateLimit"
    path          = ""
    configuration = file("../files/ratelimit.yml")
  }
}

and the attached security.yml:

apiVersion: pod-security.admission.config.k8s.io/v1
kind: PodSecurityConfiguration
defaults:
  enforce: restricted
  enforce-version: latest
  audit: restricted
  audit-version: latest
  warn: restricted
  warn-version: latest
exemptions:
  usernames: []
  runtimeClasses: []
  namespaces: []

I have no idea what would cause this error.

jocelynthode avatar Jul 20 '23 07:07 jocelynthode

Hi @jocelynthode can you also show the contents of this "ratelimit.yml" file?

git-ival avatar Jul 20 '23 15:07 git-ival

Actually, looking at the original description for this issue, there is an EventRateLimit configuration in the admission_configuration. If that is what is being used, then the error could be caused by this line, as it explicitly checks that the nested configuration's "kind" value is PodSecurityConfiguration. I had assumed the only Configurations that should be applied via admission_configuration were PodSecurityConfigurations, but it seems this may not be the case.

@a-blender ~~Are you able to confirm/deny whether setting Configurations other than PodSecurityConfigurations is allowed/valid in the admission_configuration block?~~ Edit: It looks like other Configurations are valid here (https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/). For reference, here is the upstream documentation I was able to find:

  • https://kubernetes.io/docs/reference/config-api/apiserver-config.v1/
  • https://kubernetes.io/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/
  • https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/

@jocelynthode Out of curiosity, is there any reason you are applying the EventRateLimit in the admission_configuration block instead of the event_rate_limit block?
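
For context, the dedicated block lives under rke_config > services > kube_api. A minimal sketch follows; the field names (`enabled`, `configuration`) are my reading of the provider docs and should be treated as an assumption, not a verified reference:

```hcl
rke_config {
  services {
    kube_api {
      event_rate_limit {
        enabled       = true
        configuration = file("../files/ratelimit.yml") # same YAML you would otherwise inline as a plugin
      }
    }
  }
}
```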

git-ival avatar Jul 20 '23 15:07 git-ival

@jocelynthode the error below is caused by a user misconfiguration, not by an issue with the rancher2 provider. This feature (for RKE) has been validated with #1112; test results may be found here. Closing this out now 🚀

Error:

Planning failed. Terraform encountered an error while generating this plan.


Error: kind value Configuration should be: PodSecurityConfiguration
 
  with rancher2_cluster.staging,
  on cluster.tf line 1, in resource "rancher2_cluster" "staging":
   1: resource "rancher2_cluster" "staging" {
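
Concretely, the fix is to keep only the PodSecurity plugin in admission_configuration and configure the rate limit elsewhere. A sketch based on the config posted above (not verified against every provider version):

```hcl
admission_configuration {
  api_version = "apiserver.config.k8s.io/v1"
  kind        = "AdmissionConfiguration"
  plugins {
    name          = "PodSecurity"
    path          = ""
    configuration = file("../files/security.yml")
  }
}
```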

Josh-Diamond avatar Jul 20 '23 23:07 Josh-Diamond

I don't remember exactly why I had the EventRateLimit in admission_configuration (maybe because at the time it could not be configured in event_rate_limit). In any case, that is no longer true, and it was a misconfiguration on my part. Thanks for pointing it out @Josh-Diamond! It's now working perfectly fine with 3.1.0-rc5.

jocelynthode avatar Jul 24 '23 07:07 jocelynthode