helm-openldap
Error: olcMirrorMode: value #0: <olcMirrorMode> database is not a shadow
Hello,
I am unable to get OpenLDAP running on a 1.21.1 Kubernetes cluster; I get this error:
2021-06-30T10:25:49.549204376+02:00 60dc2a8d @(#) $OpenLDAP: slapd 2.4.57+dfsg-1~bpo10+1 (Jan 30 2021 06:59:51) $
2021-06-30T10:25:49.549230976+02:00 Debian OpenLDAP Maintainers <[email protected]>
2021-06-30T10:25:49.598702423+02:00 60dc2a8d olcMirrorMode: value #0: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598734095+02:00 60dc2a8d config error processing olcDatabase={0}config,cn=config: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598738830+02:00 60dc2a8d slapd stopped.
2021-06-30T10:25:49.598743525+02:00 60dc2a8d connections_destroy: nothing to destroy.
I am using this helm install:
helm install openldap helm-openldap/openldap-stack-ha \
--namespace openldap \
--create-namespace \
--set replicaCount=1 \
--set replication.enabled=false \
--set image.tag=1.5.0 \
--set-string logLevel="trace" \
--set-string env.LDAP_ORGANISATION="Test LDAP" \
--set-string env.LDAP_DOMAIN="ldap.internal.xxxxxxx.com" \
--set-string env.LDAP_BACKEND="mdb" \
--set-string env.LDAP_TLS="true" \
--set-string env.LDAP_TLS_ENFORCE="false" \
--set-string env.LDAP_REMOVE_CONFIG_AFTER_SETUP="true" \
--set-string env.LDAP_ADMIN_PASSWORD="admin" \
--set-string env.LDAP_CONFIG_PASSWORD="config" \
--set-string env.LDAP_READONLY_USER="true" \
--set-string env.LDAP_READONLY_USER_USERNAME="readonly" \
--set-string env.LDAP_READONLY_USER_PASSWORD="password"
Any help would be appreciated. Thanks!
Hi, sorry for the late answer.
I will try your values asap and reproduce the issue.
Why are you not using the replication mechanism? Your "database is not a shadow" issue seems to be related to a replication configuration being applied to a non-replicated database.
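On a pod that does start, one way to check whether any replication/shadow attributes are still present in cn=config is something like the following (a rough sketch: the pod name, the cn=admin,cn=config config admin DN and your LDAP_CONFIG_PASSWORD are assumptions, adjust them to your deployment):
kubectl exec -it -n openldap openldap-openldap-stack-ha-0 -- \
  ldapsearch -x -LLL -H ldap://localhost -D "cn=admin,cn=config" -w config \
    -b "cn=config" "(|(olcMirrorMode=*)(olcSyncrepl=*))" dn olcMirrorMode olcSyncrepl
If this returns any entries while replication.enabled=false, the config database still carries old replication settings.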
Hello @jp-gouin ,
Thanks, the reason is that I am running a very small local single-node Kubernetes cluster.
Hi, I just tested your values and it works for me.
Can you please run: kubectl get pvc -n openldap
-> If you have more than one PVC, it means that you had a previous deployment; since the app is a StatefulSet it will reclaim the old volumes, so you won't start from scratch.
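If there is an old claim, the simplest way to really start from scratch is to remove the release together with its PVCs before reinstalling. A sketch, using the release name and namespace from your command (note that this permanently deletes the LDAP data in that namespace):
helm uninstall openldap -n openldap
kubectl get pvc -n openldap            # list any claims left over from the previous deployment
kubectl delete pvc --all -n openldap   # remove them so the StatefulSet provisions fresh volumes
Then run the helm install again.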
Hello @jp-gouin,
Thanks, I am testing this on a kind cluster locally. I can't check that any more because I already deleted that cluster, but it seems to be working on my new cluster. I am sure I deleted the PVC (and the PV).
With ldapsearch I keep getting Invalid Credentials; I have tried different options and values. Looking at the helm command above, can you give me the correct options?
I also tried phpLDAPadmin, but that one is failing as well.
Here is how I query the LDAP base locally; you can change the -H argument to the IP of the LoadBalancer, or to the node IP and NodePort, depending on how the service is exposed:
root@openldap-stack-ha-0:/# ldapsearch -x -H ldap://localhost -b "dc=example,dc=org" -D "cn=admin,dc=example,dc=org" -w Not@SecurePassw0rd
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=org> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.org
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example
# myGroup, example.org
dn: cn=myGroup,dc=example,dc=org
cn: myGroup
gidNumber: 500
objectClass: posixGroup
objectClass: top
# Jean Dupond, example.org
dn: cn=Jean Dupond,dc=example,dc=org
cn: Jean Dupond
gidNumber: 500
givenName: Jean
homeDirectory: /home/users/jdupond
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
sn: Dupond
uid: jdupond
uidNumber: 1000
userPassword:: e01ENX1LT1VMaHpmQmhQVHE5azdhOVhmQ0d3PT0=
# search result
search: 2
result: 0 Success
# numResponses: 4
# numEntries: 3
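For your values above (env.LDAP_DOMAIN="ldap.internal.xxxxxxx.com") the image derives the base DN from the domain, so the query should look roughly like this (a sketch: the pod name, local port and password are assumptions, and see the password discussion below):
kubectl port-forward -n openldap openldap-openldap-stack-ha-0 3890:389   # optional: expose the pod locally
ldapsearch -x -H ldap://localhost:3890 \
  -b "dc=ldap,dc=internal,dc=xxxxxxx,dc=com" \
  -D "cn=admin,dc=ldap,dc=internal,dc=xxxxxxx,dc=com" \
  -w 'Not@SecurePassw0rd'
Note that unless adminPassword is overridden at install time, the effective admin password is the chart default Not@SecurePassw0rd, not the value passed via env.LDAP_ADMIN_PASSWORD.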
@jp-gouin, thank you. One final question: in my Helm command I am trying to override LDAP_ADMIN_PASSWORD and LDAP_CONFIG_PASSWORD. However, when I look at the env values in the pod, I see them twice. Is my config OK, or is there a problem with the Helm chart?
Yes, it's because those two variables are defined in the values.yaml:
# Default Passwords to use, stored as a secret.
# You can override these at install time with
# helm install openldap --set openldap.adminPassword=<passwd>,openldap.configPassword=<passwd>
adminPassword: Not@SecurePassw0rd
configPassword: Not@SecurePassw0rd
so you don't need to configure them in the env section, as they will be overridden by the ones defined in the values.
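If you want to verify which passwords the release actually uses, you can decode the secret created by the chart (a sketch: the secret name and key names below are assumptions, check them with the first command):
kubectl get secrets -n openldap
kubectl get secret -n openldap openldap-openldap-stack-ha \
  -o jsonpath='{.data.LDAP_ADMIN_PASSWORD}' | base64 -d; echo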
Hello @jp-gouin ,
Thanks, that's clear, I didn't see that. However, I still have a problem:
This is the install:
helm install openldap helm-openldap/openldap-stack-ha \
--namespace openldap \
--create-namespace \
--set replicaCount=1 \
--set replication.enabled=false \
--set image.tag=1.5.0 \
--set test.enabled=false \
--set ltb-passwd.enabled=false \
--set phpldapadmin.ingress.enabled=false \
--set env.LDAP_ORGANISATION="Test LDAP" \
--set env.LDAP_DOMAIN="ldap.xxxxxxx.local" \
--set-string env.LDAP_READONLY_USER="true" \
--set env.LDAP_READONLY_USER_USERNAME="readonly" \
--set env.LDAP_READONLY_USER_PASSWORD="password" \
--set openldap.adminPassword="admin" \
--set openldap.configPassword="config"
The admin and config passwords are sometimes set to the values above, but most of the time to Not@SecurePassw0rd. Strange, and I can't figure out why.
On top of that, even when the values are set to my passwords, I still have to log in to phpLDAPadmin as admin with the Not@SecurePassw0rd password.
Could you take a look at this?
Thanks.
@ws-prive Regarding the login issues here, this is the install command:
helm install openldap helm-openldap/openldap-stack-ha \
--namespace openldap \
... \
--set env.LDAP_DOMAIN="ldap.xxxxxxx.local" \
... \
--set adminPassword="adminPasswd" \
--set configPassword="configPasswd"
Note that adminPassword and configPassword are used here without the openldap. prefix. This matches the original values.yaml.
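The same thing can be done with a values file instead of --set flags (a minimal sketch using the same top-level keys; my-values.yaml is just an example file name):
cat > my-values.yaml <<'EOF'
adminPassword: adminPasswd
configPassword: configPasswd
EOF
helm upgrade --install openldap helm-openldap/openldap-stack-ha \
  --namespace openldap --create-namespace -f my-values.yaml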
LDAP Search example:
$ kubectl --namespace openldap exec -it openldap-openldap-stack-ha-0 -- /bin/bash
# ldapsearch -x -H ldap://localhost -b "dc=ldap,dc=xxxxxxx,dc=local" -D "cn=admin,dc=ldap,dc=xxxxxxx,dc=local" -w "adminPasswd"
The shadow database error is related to replication: there is not enough information here to say more.
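To see what was actually applied to the release (passwords, replication.enabled, and so on), helm can dump the values, for example:
helm get values openldap -n openldap          # only the user-supplied overrides
helm get values openldap -n openldap --all    # full merged values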
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.