helm-charts
[BUG][Opensearch] security config doesn't update when applying Helm chart
Describe the bug
Hello everyone, I've deployed OpenSearch and am trying to modify config.yml via the Helm chart to add OpenID Connect. The problem is that no matter how I do it, my configuration doesn't update.
I tried:
- Creating the secret on my own and adding it to securityConfig.configSecret: it gets mounted into the pods, but the configuration doesn't update.
- Adding the config.yml via securityConfig.config.data (file below): it also gets mounted into the pods but doesn't update the configuration.
An hour ago I finally found a way to update the configuration: I have to run
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
  -cacert /usr/share/opensearch/config/certs/ca.crt \
  -cert /usr/share/opensearch/config/certs/tls.crt \
  -key /usr/share/opensearch/config/certs/tls.key \
  -f /usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml \
  -icl -nhnv -t config
in the master pod in order for config.yml to be taken into account.
This is really impractical for a Kubernetes deployment, and I'm pretty sure there's a "normal" way to do it?
To Reproduce
Steps to reproduce the behavior:
- Add a custom config.yml via securityConfig.configSecret (or mount a secret manually and append its name to securityConfig.config.data)
- Run helm upgrade
- The file gets mounted but isn't taken into account by securityadmin
Expected behavior
The security config gets updated according to the Helm chart.
Chart Name
Opensearch
Host/Environment (please complete the following information):
- Opensearch 2.1.0
- Opensearch Helm Chart 2.3.0
Additional context
Here are the relevant parts of my values.yaml:
opensearch.yml
opensearchHome: /usr/share/opensearch
config:
opensearch.yml: |
cluster.name: opensearch-cluster
network.host: 0.0.0.0
logger.org.opensearch.index.reindex: trace
logger.securityjwt.level: trace
plugins:
security:
cache:
ttl_minutes: 1440
nodes_dn:
- CN=opensearch-cluster-master-headless.logs.svc.cluster.local
ssl:
transport:
pemcert_filepath: certs/tls.crt
pemkey_filepath: certs/tls.key
pemtrustedcas_filepath: certs/ca.crt
enforce_hostname_verification: false
http:
enabled: true
pemcert_filepath: certs/tls.crt
pemkey_filepath: certs/tls.key
pemtrustedcas_filepath: certs/ca.crt
allow_default_init_securityindex: true
authcz:
admin_dn:
- CN=opensearch-cluster-master-headless.logs.svc.cluster.local
audit.type: internal_opensearch
enable_snapshot_restore_privilege: true
check_snapshot_restore_write_privileges: true
restapi:
roles_enabled: ["all_access", "security_rest_api_access"]
system_indices:
enabled: true
indices:
[
".opendistro-alerting-config",
".opendistro-alerting-alert*",
".opendistro-anomaly-results*",
".opendistro-anomaly-detector*",
".opendistro-anomaly-checkpoints",
".opendistro-anomaly-detection-state",
".opendistro-reports-*",
".opendistro-notifications-*",
".opendistro-notebooks",
".opendistro-asynchronous-search-response*",
]
config.yml
securityConfig:
enabled: true
path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
actionGroupsSecret:
configSecret:
internalUsersSecret:
rolesSecret:
rolesMappingSecret:
tenantsSecret:
config:
securityConfigSecret: ""
dataComplete: true
data:
config.yml: |
_meta:
type: "config"
config_version: 2
config:
dynamic:
http:
anonymous_auth_enabled: false
xff:
enabled: false
internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
authc:
basic_internal_auth_domain:
description: "Authenticate via HTTP Basic against internal users database"
http_enabled: true
transport_enabled: true
order: 1
http_authenticator:
type: basic
challenge: false
authentication_backend:
type: internal
openid_auth_domain:
description: "OpenID"
http_enabled: true
transport_enabled: true
order: 0
http_authenticator:
type: openid
challenge: false
config:
subject_key: preferred_username
roles_key: roles
openid_connect_url: "https://<gitlab_url>/.well-known/openid-configuration"
authentication_backend:
type: noop
env
extraEnvs:
- name: DISABLE_INSTALL_DEMO_CONFIG
value: "true"
Adding @DandyDeveloper @TheAlgo: can you provide your feedback?
This behaviour is documented: https://opensearch.org/docs/latest/security-plugin/configuration/security-admin/#a-word-of-caution
It's come up a few times; maybe it needs to be added to the FAQ.
I think this should be possible: set up a job that runs securityadmin.sh every time a change in the secret is identified.
This is a property of OpenSearch and doesn't have much to do with the Helm charts. A sidecar or any kind of job/hook would help here.
Also, @smlx, thanks for the suggestion. I guess we should have a FAQ section in the chart README; it would be of great help to users.
I'm not sure I understand. We are supposed to run a script manually after deployment to configure the security config properly?
> We are supposed to run a script manually after deployment to configure the security config properly?
No. The first time that you install the chart it will automatically bootstrap the securityconfig using the config files (e.g. the config you have added to the chart).
It is only if you subsequently change the securityconfig after the cluster has bootstrapped that changes in security config are not automatically applied. The link I posted above explains the reasons behind this behaviour. In short: auto-applying security config changes causes data loss.
> I'm not sure I understand. We are supposed to run a script manually after deployment to configure the security config properly?
Yes, exactly. Not sure why @smlx is saying no; the only exception is the first deployment. It is indeed documented. When I first read that section of the docs I assumed it only applied to local/docker-compose environments, as is usually the case for that kind of thing, so I guess that's my bad. As my original post says, though, it's a really impractical way to update a configuration inside a Pod.
Is there anything special that we need to set in the values.yaml in order to apply the custom securityConfig at bootstrap? I've been trying to reinstall the Helm chart, but every time I reinstall it I need to execute the securityadmin.sh script in order to apply the configuration defined under securityConfig in the Helm values.
> Is there anything special that we need to set in the values.yaml in order to apply the custom securityConfig at bootstrap? I've been trying to reinstall the Helm chart, but every time I reinstall it I need to execute the securityadmin.sh script in order to apply the configuration defined under securityConfig in the Helm values.
I know your comment is 19 days old and you've probably solved it, but for others, this fixed it for me.
In values.yaml I had to:
1. Set securityConfig.path to /usr/share/opensearch/config/opensearch-security
2. Comment out securityConfigSecret: ""
I did 1) because I noticed in the logs that that's where it was loading the config from; I suspect it's just a new default location in the newest OpenSearch images.
I did 2) just in case; it's probably not needed, but I didn't go back and reinstall to check.
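For reference, the corresponding values.yaml override would look roughly like the sketch below. This is only an illustration of the two steps above, not chart-verified defaults; the exact path the image reads from can differ between OpenSearch versions, so check your startup logs as described.

```yaml
securityConfig:
  enabled: true
  # 1) Point at the directory the newer images actually load from:
  path: "/usr/share/opensearch/config/opensearch-security"
  config:
    # 2) securityConfigSecret: ""   # commented out, as described above
    dataComplete: true
    data:
      config.yml: |
        # ... your security config ...
```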
@thanksmate1985 hello there, can you actually confirm that 1) and 2) are enough? I'm positive they aren't; I tried changing the path too.
As someone stated earlier, the docs say that you have to run securityadmin.sh manually in order to apply the configuration.
Also, if it ever works, be wary that you may overwrite configuration changes made via OpenSearch Dashboards (a newly created user, etc.). You would have to fetch the current config and append your changes to it; otherwise you would overwrite it.
Hi Jonathan,
Just to confirm, I'm only talking about Pigueira's question on how to do it at bootstrap only.
I'm new to Kubernetes, so please correct me if I've done something wrong.
Initially, when I saw your post, I admit I did run:
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
  -cacert /usr/share/opensearch/config/root-ca.pem \
  -cert /usr/share/opensearch/config/esnode.pem \
  -key /usr/share/opensearch/config/esnode-key.pem \
  -cd /usr/share/opensearch/config/opensearch-security/
But then I saw other comments saying it should work on the first try, so I reviewed the startup logs and some files in the container and found my configs were in the wrong directory. I corrected the path, and after uninstalling/reinstalling it worked, so I figured that was it and let Pigueira know.
I just now uninstalled via helm, deleted the 3 PVCs, and reinstalled, and it still worked. My security config change was connecting it up to Keycloak via OpenID, and I updated opensearch-dashboards to use OpenID as well. I've tested it only via Dashboards (not via curl or anything): it redirects me to Keycloak, I log in, and then I see my Keycloak username and roles in OpenSearch Dashboards.
Is your path exactly as in my previous post? Is OS version 2.3.0?
@thanksmate1985 that's my bad then! It actually works at bootstrap, but that's the only time it does. You are right.
We noticed this too. Not only does the security config not apply on subsequent helm upgrades, it also forces all the pods to restart, since it changes the checksum here:
https://github.com/opensearch-project/helm-charts/blob/d5c349318250b5371451200a6e5919b7cd5535e1/charts/opensearch/templates/statefulset.yaml#L68
One idea was to remove the checksum annotation and not even bother trying to update the secret mount in the pods, but instead run a helm post-upgrade "job" hook to configure the security every time you apply the Helm chart: https://helm.sh/docs/topics/charts_hooks/#the-available-hooks
It could be a job you enable with the understanding that it could lose settings you applied out of band...
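A minimal sketch of what such a post-upgrade hook Job could look like. Everything here is illustrative, not part of the chart: the Job name, secret names, service name, and certificate paths are assumptions based on the setups in this thread, and you would need to adapt them to your release.

```yaml
# Hypothetical Helm post-upgrade hook: reruns securityadmin.sh after each upgrade.
apiVersion: batch/v1
kind: Job
metadata:
  name: opensearch-securityadmin-hook   # illustrative name
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: securityadmin
          image: opensearchproject/opensearch:2.1.0  # match your cluster version
          command:
            - /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh
            - -cd
            - /usr/share/opensearch/config/opensearch-security/
            - -icl
            - -nhnv
            - -cacert
            - /usr/share/opensearch/config/certs/ca.crt
            - -cert
            - /usr/share/opensearch/config/certs/tls.crt
            - -key
            - /usr/share/opensearch/config/certs/tls.key
            - -h
            - opensearch-cluster-master  # cluster service name (assumed)
          volumeMounts:
            - name: securityconfig
              mountPath: /usr/share/opensearch/config/opensearch-security
            - name: certs
              mountPath: /usr/share/opensearch/config/certs
      volumes:
        - name: securityconfig
          secret:
            secretName: opensearch-securityconfig  # your securityConfig secret (assumed)
        - name: certs
          secret:
            secretName: opensearch-certs  # your TLS secret (assumed)
```

The cert presented by the Job would still need an admin_dn entry in opensearch.yml, the same constraint as running securityadmin.sh by hand.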
Thank you @Jonathan-w6d for pointing out the solution. Restarting the pods does nothing. I had to run:
kubectl exec -it opensearch-cluster-master-0 -- /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
-cd /usr/share/opensearch/config/opensearch-security/ \
-icl -nhnv \
-cacert /usr/share/opensearch/config/certs/ca.pem \
-cert /usr/share/opensearch/config/certs/os01.pem \
-key /usr/share/opensearch/config/certs/os01.key \
-t config.yml \
-t roles.yml \
-t roles_mapping.yml \
-t internal_users.yml \
-t action_groups.yml \
-t nodes_dn.yml \
-t whitelist.yml \
-t allowlist.yml \
-t audit.yml \
-t tenants.yml