Enable fine-grained cluster resource scoping for `validations.kong.konghq.com` webhook
Problem Statement
We deploy the kong helm chart twice into the same cluster:
- research (used by our team for our own purposes)
- test (used by our customers to deploy the nonprod versions of their own APIs).
We also leverage custom plugins, and deploy them as (global) kongclusterplugins.
The validations.kong.konghq.com webhook rules are currently matching on kongclusterplugins.
Consider a scenario where we have a global custom plugin, v1, deployed to research and test. Then we decide to create v2, which has a slightly different schema. If we try to deploy v2, the webhook deployed to the test environment does not yet know about the new version of the plugin and denies admission to the cluster. The chart currently supports configuration of a namespaceSelector, but this only applies to namespaced objects, and kongclusterplugins are not namespaced.
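For concreteness, a minimal sketch of such a cluster-scoped plugin (the plugin name and config key below are invented for illustration; only the kind and the `global` label reflect how KongClusterPlugins are typically used):

```yaml
# Hypothetical example of a global custom plugin deployed as a KongClusterPlugin.
# The plugin name and config field are made up; the "global" label is what marks
# a KongClusterPlugin as applying globally.
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: my-custom-plugin
  labels:
    global: "true"
plugin: my-custom-plugin
config:
  new_v2_field: "a field only the v2 schema knows"  # rejected by a webhook still serving the v1 schema
```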
Solution proposal
One approach would be to update the objectSelector to include a matchLabels selector (configurable via values), so that labels can be used to exclude individual objects from validation.
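A hedged sketch of what this could render to in the generated ValidatingWebhookConfiguration (the resource, service, and label names below are invented for illustration); note that excluding labelled objects needs a matchExpressions NotIn clause rather than plain matchLabels:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: kong-test-validations              # illustrative name
webhooks:
- name: validations.kong.konghq.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: kong-test-validation-webhook   # illustrative service name
      namespace: kong-test
  rules:
  - apiGroups: ["configuration.konghq.com"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE"]
    resources: ["kongclusterplugins"]
  # Proposed addition: skip any object carrying an opt-out label.
  # The label key/value are made up for the example.
  objectSelector:
    matchExpressions:
    - key: validations.kong.example/ignore
      operator: NotIn
      values: ["true"]
```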
Thanks @msmost for raising the issue. We are facing the same problem. We currently have 3 Kong KIC instances deployed (different teams, use cases, security requirements and lifecycles).
Some indeed have different custom plugins. I went ahead and reproduced exactly your myheader custom plugin from https://docs.konghq.com/kubernetes-ingress-controller/latest/plugins/custom/ for our kong-green instance (kong-blue and kong-protected-apps don't know about this custom plugin).
You can see in the screenshot below (GCP logs) that the validation webhooks (one for each Kong installation) return different status codes:
- kong-green returns 200, whereas
- kong-blue and kong-protected-apps both return 400
Those response codes were obtained by running the last command of your guide, i.e.
echo '
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: my-custom-plugin
  namespace: kong-green
config:
  header_value: "my first plugin"
plugin: myheader
' | kubectl -n kong-green apply -f -
but kubectl fails with:
Error from server: error when creating "kongPlugin.yaml": admission webhook "validations.kong.konghq.com" denied the request: plugin failed schema validation: schema violation (name: plugin 'myheader' not enabled; add it to the 'plugins' configuration property)
even though the validation webhook of kong-green did return 200.
Looking at your helm chart, there's indeed a namespaceSelector https://github.com/Kong/charts/blob/main/charts/kong/templates/admission-webhook.yaml#L131.
But that wouldn't work for us, as we don't control which workloads run in which namespace, and most of the time our 3 Kong instances route traffic to workloads in the same namespace.
The most practical solution seems to be to continue leveraging the annotation
annotations:
  kubernetes.io/ingress.class: kong-green
just like for the rest of the resources.
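Concretely, the idea is that each installation's validating webhook would only act on objects carrying its own class annotation, e.g. (plugin name and config are illustrative; the annotation key is the one the controller already honours for other resources):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: myheader-global
  annotations:
    kubernetes.io/ingress.class: kong-green  # only kong-green should reconcile (and, ideally, validate) this object
  labels:
    global: "true"
plugin: myheader
config:
  header_value: "my first plugin"
```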
Thank you for your help.
Hi @joran-fonjallaz and @msmost,
As I read this issue, I believe that adding objectSelector, matchPolicy, and matchConditions to the webhook config exposed in the chart would let you match only the objects you want. Is that right?
@pmalek - yep! Thinking that will solve the issue.
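For illustration, a hedged sketch of what exposing matchConditions could look like in the rendered webhook config, using a CEL expression to restrict validation to objects annotated with this installation's ingress class (the fragment and expression are illustrative, not the chart's actual output; matchConditions require Kubernetes 1.27+):

```yaml
webhooks:
- name: validations.kong.konghq.com
  matchConditions:
  - name: only-this-ingress-class
    expression: >-
      has(object.metadata.annotations) &&
      'kubernetes.io/ingress.class' in object.metadata.annotations &&
      object.metadata.annotations['kubernetes.io/ingress.class'] == 'kong-green'
```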
I'll just add that I hit the same issue, with multiple Helm deployments of Kong in the same namespace.
The plugin needs to be enabled on both installations before it can be used.
kubernetes-ingress-controller:3.5.1
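For reference, a hedged sketch of the per-installation values needed so the custom plugin is mounted and enabled (ConfigMap and plugin names are illustrative, assuming the chart's plugins.configMaps convention):

```yaml
# Roughly the same block has to exist in every installation that should accept
# the plugin; otherwise that installation's webhook rejects the resource.
plugins:
  configMaps:
  - pluginName: myheader
    name: kong-plugin-myheader
```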