Unable to run multiple bitnami openldap containers with common shared volume
Name and Version
bitnami/openldap 2.6
What architecture are you using?
amd64
What steps will reproduce the bug?
- Add custom ldif files under the /ldifs directory and build another container image named localhost:32000/custom-openldap (see the build sketch after this list)
- Create a common directory that will be mounted to all the LDAP containers (/root/openldap)
- Create multiple containers that mount the same directory (/root/openldap) using the following command
docker run -d -e BITNAMI_DEBUG="true" -e LDAP_ADMIN_USERNAME="superuser" -e LDAP_BINDDN="cn=ldap_bind_user,ou=people,dc=example,dc=com" -e LDAP_ENABLE_TLS="no" -e LDAP_EXTRA_SCHEMAS="cosine,general-acl,my-permissions,my-roles,ppolicy,nis,inetorgperson" -e LDAP_ROOT="dc=example,dc=com" -e LDAP_SKIP_DEFAULT_TREE="yes" -e LDAP_URI="ldap://ldap-server-service.my-namespace.svc.cluster.local" -e USER_DESCRIPTION_MAX_LEN="100" -e USER_FIRST_AND_LAST_NAME_MAX_LEN="100" -e USER_NAME_MAX_LEN="100" -e LDAP_ADMIN_PASSWORD="admin123" -e LDAP_READONLY_USER_PASSWORD="admin123" -e proxyBindPassword="" -v /root/openldap:/bitnami/openldap localhost:32000/custom-openldap
- List the running containers using the docker ps command
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f77ef5455f5f localhost:32000/custom-openldap "/opt/bitnami/script…" 2 minutes ago Up 2 minutes 1389/tcp, 1636/tcp upbeat_raman
9cccd41f02d2 localhost:32000/custom-openldap "/opt/bitnami/script…" 17 minutes ago Up 17 minutes 1389/tcp, 1636/tcp nostalgic_antonelli
5434761c9281 localhost:32000/custom-openldap "/opt/bitnami/script…" 23 minutes ago Up 23 minutes 1389/tcp, 1636/tcp objective_mayer
ca40ef1a68a2 localhost:32000/custom-openldap "/opt/bitnami/script…" 26 minutes ago Up 26 minutes 1389/tcp, 1636/tcp angry_margulis
- Execute the following ldapsearch command in all the containers
ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
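For reference, a rough sketch of how the custom image in the first step can be built (the image name and the /ldifs directory come from this report; the 2.6 base tag and the local ./ldifs directory are assumptions):

# Build the custom image referenced in step 1. The Bitnami entrypoint
# imports the *.ldif files found under /ldifs when the data volume is
# initialized for the first time (it skips them if data already exists).
cat > Dockerfile <<'EOF'
FROM bitnami/openldap:2.6
COPY ldifs/ /ldifs/
EOF
docker build -t localhost:32000/custom-openldap .
docker push localhost:32000/custom-openldap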
What is the expected behavior?
The expected behaviour is that ldapsearch should work correctly on all the containers.
What do you see instead?
ldapsearch works on one container, whereas on the other containers we see the following error:
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
I wanted to know whether it is feasible to use the same mount point for multiple Bitnami OpenLDAP containers.
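For comparison, a minimal sketch of the alternative layout, where each container gets its own data directory instead of sharing /root/openldap (the directory and container names below are placeholders, and the docker run options are abbreviated):

# Each slapd instance gets its own copy of the MDB database files,
# so no two servers open the same /bitnami/openldap data at once.
docker run -d --name ldap-1 -e LDAP_ADMIN_USERNAME="superuser" -e LDAP_ADMIN_PASSWORD="admin123" \
  -v /root/openldap-1:/bitnami/openldap localhost:32000/custom-openldap
docker run -d --name ldap-2 -e LDAP_ADMIN_USERNAME="superuser" -e LDAP_ADMIN_PASSWORD="admin123" \
  -v /root/openldap-2:/bitnami/openldap localhost:32000/custom-openldap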
Additional information
The following is the list of running containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f77ef5455f5f localhost:32000/custom-openldap "/opt/bitnami/script…" 2 minutes ago Up 2 minutes 1389/tcp, 1636/tcp upbeat_raman
9cccd41f02d2 localhost:32000/custom-openldap "/opt/bitnami/script…" 17 minutes ago Up 17 minutes 1389/tcp, 1636/tcp nostalgic_antonelli
5434761c9281 localhost:32000/custom-openldap "/opt/bitnami/script…" 23 minutes ago Up 23 minutes 1389/tcp, 1636/tcp objective_mayer
ca40ef1a68a2 localhost:32000/custom-openldap "/opt/bitnami/script…" 26 minutes ago Up 26 minutes 1389/tcp, 1636/tcp angry_margulis
The following is the ldapsearch output on each container:
- f77ef5455f5f
$ docker exec -it f77ef5455f5f bash
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
- 9cccd41f02d2
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
- 5434761c9281
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
# groups, example.com
dn: ou=groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: groups
.
.
.
- ca40ef1a68a2 (somehow the LDAP bind failed on this container; there seems to be an environment issue)
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
Hi, the issue may not be directly related to the Bitnami container image/Helm chart, but rather to how the application is being used or configured in your specific environment, or to a particular scenario that is not easy to reproduce on our side.
If you think that's not the case and want to contribute a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.
If you have any questions about the application, customizing its content, or technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.
With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.
Hi @carrodher - Can you please point me to the OpenLDAP doc link that will help me set up clustering with a common shared volume?
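If the shared volume is not supported, what I could try instead is a plain OpenLDAP syncrepl setup, where each node keeps its own /bitnami/openldap volume and replicates over the network instead of sharing database files. A rough, unverified sketch of a consumer definition (the provider URL, bind DN and credentials are taken from this report; the config-admin DN, its password and the {2}mdb index are placeholders that need to match the actual cn=config):

# Rough syncrepl consumer sketch (standard OpenLDAP, not a Bitnami-specific
# feature). Requires cn=config access on the consumer; adjust the
# olcDatabase index to match the mdb database entry in cn=config.
ldapmodify -H ldap://localhost:1389 -D "cn=admin,cn=config" -w "<config-admin-password>" <<'EOF'
dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=001
  provider=ldap://ldap-server-service.my-namespace.svc.cluster.local:1389
  bindmethod=simple
  binddn="cn=ldap_bind_user,ou=people,dc=example,dc=com"
  credentials=admin123
  searchbase="dc=example,dc=com"
  type=refreshAndPersist
  retry="60 +"
EOF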
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.