Support for secrets or environment variables in helm chart http-snippet
What do you want to happen?
I'm migrating our current deployment of ingress-nginx from a home-built Helm chart to the official Helm chart.
As part of our configuration, we have set up rate limiting, as described at https://www.nginx.com/blog/rate-limiting-nginx/.
We have a variety of backend locations and would like to apply the same rate limiting to all of them, so we have configured it through the controller.config.http-snippet parameter in the Helm chart; the snippet then shows up in the http section of the rendered nginx.conf.
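For context, the http-snippet value looks roughly like the sketch below (the zone name, size, and rates are illustrative placeholders, not our real configuration):

controller:
  config:
    # Shared rate-limiting directives rendered into the http {} block of nginx.conf.
    # Zone name, zone size, rate, and burst below are placeholders for illustration.
    http-snippet: |
      limit_req_zone $binary_remote_addr zone=per_client:10m rate=10r/s;
      limit_req zone=per_client burst=20 nodelay;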
As part of our rate-limiting configuration, we want to specify a special rate limit for a service that sends a specific API key in the X-API-KEY request header. We want to keep this API key secret, store it in a Kubernetes Secret, and make it available to nginx through an environment variable. Storing it in a Secret and exposing it through the extraEnvs Helm chart value is already working great.
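For reference, the extraEnvs wiring looks roughly like this sketch (the Secret name and key are placeholders):

controller:
  extraEnvs:
    # Expose the partner API key to the controller process as an environment variable.
    # The Secret name and key below are placeholders for illustration.
    - name: API_KEY_GIVEN_TO_PARTNER
      valueFrom:
        secretKeyRef:
          name: partner-api-key
          key: api-key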
My question is, is there a way to reference environment variables or secrets within the http-snippet, so that the resulting nginx.conf includes the value? We would like to set up a directive like the following:
map $http_x_api_key $burst_limit_exception {
  "{{ getenv API_KEY_GIVEN_TO_PARTNER }}" "_partner_";
  default "";
}
If I use the example above (trying to use the getenv template function defined in /main/internal/ingress/controller/template/template.go), the resulting conf rendered to /etc/nginx/nginx.conf looks like the following - it seems this section isn't run through the Go template?
map $http_x_api_key $burst_limit_exception {
  "{{ getenv API_KEY_GIVEN_TO_PARTNER }}" "_partner_";
  default "";
}
I have also tried a few other things:
- Removing the outer quotes and using {{ getenv API_KEY_GIVEN_TO_PARTNER | quote }} - this gave me the error 2022/05/20 17:56:23 [emerg] 1195#1195: unexpected "{" in /tmp/nginx/nginx-cfg1480397523:308, I think because this wasn't run through the Go template.
- Tried to use $ to reference the variable, like is supported for the Datadog configuration:
  - "$API_KEY_GIVEN_TO_PARTNER" "_partner_"; renders as "$API_KEY_GIVEN_TO_PARTNER" "_partner_";
  - "$(API_KEY_GIVEN_TO_PARTNER)" "_partner_"; renders as "$(API_KEY_GIVEN_TO_PARTNER)" "_partner_";
  - $API_KEY_GIVEN_TO_PARTNER "_partner_"; renders as $API_KEY_GIVEN_TO_PARTNER "_partner_";
- Tried to use set_by_lua as mentioned in the "Environment variables in nginx config" blog post - this failed with nginx: [emerg] "set_by_lua" directive is not allowed here in /tmp/nginx/nginx-cfg2728541408:265
Is there currently another issue associated with this?
This is similar to
- https://github.com/kubernetes/ingress-nginx/issues/8448
- https://github.com/helm/helm/issues/2133 - in our old project, we used to create this ConfigMap directly, which allowed us to treat this API key as a values parameter. This wasn't ideal (we kept the secret on our CI/CD server, but it still ended up in the Helm chart in plain text).
- Looks like https://github.com/kubernetes/ingress-nginx/issues/2901 might also be related - I think if I could add an additional template (without overriding the main nginx.conf template), I could use my environment variables there, and then I could use include in the main http-snippet.
@rmelick-vida: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Our legacy deployment method was passing this API key to Helm as a value, so it wasn't particularly secret - Helm would store it in plain text in the Kubernetes configuration.
I have done the same as a temporary workaround. I'll share how I did it here in case anyone else finds it helpful.
- I defined the rate-limiting configuration in a ConfigMap, which looks like the following (there's a lot more in this config than this single map directive):
apiVersion: v1
kind: ConfigMap
metadata:
  name: rate-limiting-config
  namespace: {{ .Release.Namespace }}
data:
  rate-limiting-conf: |
    <snip>
    map $http_x_api_key $burst_limit_exception {
      "{{ .Values.secrets.api_key_given_to_partner }}" "_partner_";
      default "";
    }
    <snip>
- I mounted this ConfigMap as a file by setting extraVolumes and extraVolumeMounts:
extraVolumes:
  - name: rate-limiting-config
    configMap:
      name: rate-limiting-config
      items:
        - key: "rate-limiting-conf"
          path: "rate-limiting.conf"
extraVolumeMounts:
  - name: rate-limiting-config
    mountPath: /etc/nginx/rate-limiting-conf
- I use the nginx include directive to include this file in the http-snippet section, as sketched below:
include /etc/nginx/rate-limiting-conf/rate-limiting.conf;
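For completeness, the corresponding http-snippet value in values.yaml is then just the include line (a sketch; our real snippet contains more than this):

controller:
  config:
    # Pull the templated rate-limiting file (mounted from the ConfigMap above)
    # into the http {} block of the rendered nginx.conf.
    http-snippet: |
      include /etc/nginx/rate-limiting-conf/rate-limiting.conf;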
I think this method of using include could work very well for this type of secret, if there were a way to mount additional templates that would be rendered through the Go template engine. I know that ingress-nginx already supports overriding the main template (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/); could it be extended to allow additional templates, perhaps everything that ends with .tmpl?
This would allow us to more easily extend the primary template using include, without having to maintain our own copy/fork of the entire template.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.