Error at docker run
The following command from the README.md:
docker run --detach --rm --name test-openldap --network my-network --env LDAP_ADMIN_USERNAME=admin --env LDAP_ADMIN_PASSWORD=adminpassword --env LDAP_USERS=customuser --env LDAP_PASSWORDS=custompassword bitnami/openldap:latest
triggers this error:
# docker logs -f test-openldap
13:48:24.30 INFO ==> ** Starting LDAP setup **
13:48:24.36 INFO ==> Validating settings in LDAP_* env vars
13:48:24.37 INFO ==> Initializing OpenLDAP...
13:48:24.40 INFO ==> Creating LDAP online configuration
13:48:24.42 INFO ==> Starting OpenLDAP server in background
13:48:43.59 INFO ==> Configure LDAP credentials for admin user
13:48:45.05 INFO ==> Adding LDAP extra schemas
13:48:45.74 INFO ==> Creating LDAP default tree
13:48:46.70 INFO ==> ** LDAP setup finished! **
13:48:46.74 INFO ==> ** Starting slapd **
5ee2363e @(#) $OpenLDAP: slapd 2.4.50 (May 4 2020 16:17:50) $
@5fb3c780904c:/bitnami/blacksmith-sandox/openldap-2.4.50/servers/slapd
5ee23655 hdb_db_open: database "dc=example,dc=org": database already in use.
5ee23655 backend_startup_one (type=hdb, suffix="dc=example,dc=org"): bi_db_open failed! (-1)
5ee23655 slapd stopped.
Hi,
This is strange; I was unable to reproduce the issue:
❯ docker run --detach --rm --name openldap \
> --network my-network \
> --env LDAP_ADMIN_USERNAME=admin \
> --env LDAP_ADMIN_PASSWORD=adminpassword \
> --env LDAP_USERS=customuser \
> --env LDAP_PASSWORDS=custompassword \
> bitnami/openldap:latest
132c0895999e676a417299b011d8341c8ac293adf2979f2af26ac2060211364a
❯ docker logs openldap
10:03:28.63 INFO ==> ** Starting LDAP setup **
10:03:28.65 INFO ==> Validating settings in LDAP_* env vars
10:03:28.66 INFO ==> Initializing OpenLDAP...
10:03:28.67 INFO ==> Creating LDAP online configuration
10:03:28.68 INFO ==> Starting OpenLDAP server in background
10:03:28.92 INFO ==> Configure LDAP credentials for admin user
10:03:28.93 INFO ==> Adding LDAP extra schemas
10:03:28.95 INFO ==> Creating LDAP default tree
10:03:28.98 INFO ==> ** LDAP setup finished! **
10:03:29.00 INFO ==> ** Starting slapd **
5ee74771 @(#) $OpenLDAP: slapd 2.4.50 (May 4 2020 16:17:50) $
@5fb3c780904c:/bitnami/blacksmith-sandox/openldap-2.4.50/servers/slapd
5ee74771 hdb_db_open: warning - no DB_CONFIG file found in directory /bitnami/openldap/data: (2).
Expect poor performance for suffix "dc=example,dc=org".
5ee74771 slapd starting
Could you provide more details about your Docker installation?
@javsalgar Just curious. Should I be worried about the no DB_CONFIG warning?
I'm not an OpenLDAP expert, but I checked with my colleagues and you shouldn't worry about it.
I'm seeing this behavior as well.
# docker run --rm --name test-openldap bitnami/openldap:2.4.56
21:50:43.48 INFO ==> ** Starting LDAP setup **
21:50:43.53 INFO ==> Validating settings in LDAP_* env vars
21:50:43.56 INFO ==> Initializing OpenLDAP...
21:50:43.59 INFO ==> Creating LDAP online configuration
21:50:43.64 INFO ==> Starting OpenLDAP server in background
21:50:44.85 INFO ==> Configure LDAP credentials for admin user
21:50:44.87 INFO ==> Adding LDAP extra schemas
21:50:44.91 INFO ==> Creating LDAP default tree
21:50:46.28 INFO ==> ** LDAP setup finished! **
21:50:46.31 INFO ==> ** Starting slapd **
5fecf636 @(#) $OpenLDAP: slapd 2.4.56 (Nov 11 2020 03:09:45) $
@3ccbe9810b42:/bitnami/blacksmith-sandox/openldap-2.4.56/servers/slapd
5fecf636 hdb_db_open: database "dc=example,dc=org": database already in use.
5fecf636 backend_startup_one (type=hdb, suffix="dc=example,dc=org"): bi_db_open failed! (-1)
5fecf636 slapd stopped.
docker -v reports Docker version 19.03.12, build 48a66213fe on a CentOS 7.8.2003 machine. On an Ubuntu 18.04.5 machine with Docker version 19.03.6, build 369ce74a3c, I do not have this issue.
Hi,
This is very strange; could it be because of some kernel restrictions on the CentOS machine?
It seems like https://github.com/osixia/docker-openldap/issues/85 may be related; the CentOS machine I'm getting failures on has some sort of NFS involved.
Hi @sumidiot
By default "slapd" is started with log level "256", see:
- https://github.com/bitnami/bitnami-docker-openldap/blob/master/2/debian-10/rootfs/opt/bitnami/scripts/openldap/run.sh#L16
You can try to build a custom image using a higher level, e.g. "-1" (more info at https://www.openldap.org/doc/admin24/slapdconfig.html), so that you obtain more information about why the error appears.
I can copy the full logs if that would be useful, but here's what seems like the relevant output at level "-1" (I see no glaring errors elsewhere in the log):
5ff4ebce slapd startup: initiated.
5ff4ebce backend_startup_one: starting "cn=config"
5ff4ebce config_back_db_open
Backend ACL: access to *
by * none
5ff4ebce config_back_db_open: line 0: warning: cannot assess the validity of the ACL scope within backend naming context
5ff4ebce backend_startup_one: starting "cn=Monitor"
5ff4ebce >>> dnNormalize: <cn=Monitor>
5ff4ebce <<< dnNormalize: <cn=monitor>
5ff4ebce >>> dnPretty: <cn=Backends>
=> ldap_bv2dn(cn=Backends,0)
<= ldap_bv2dn(cn=Backends)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=Backends)=0
5ff4ebce <<< dnPretty: <cn=Backends>
...
5ff4ebce >>> dnNormalize: <cn=Database 1>
5ff4ebce <<< dnNormalize: <cn=database 1>
5ff4ebce >>> dnNormalize: <cn=Backend 2,cn=Backends,cn=Monitor>
=> ldap_bv2dn(cn=Backend 2,cn=Backends,cn=Monitor,0)
<= ldap_bv2dn(cn=Backend 2,cn=Backends,cn=Monitor)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=backend 2,cn=backends,cn=monitor)=0
5ff4ebce <<< dnNormalize: <cn=backend 2,cn=backends,cn=monitor>
5ff4ebce >>> dnNormalize: <cn=Database 2>
5ff4ebce <<< dnNormalize: <cn=database 2>
5ff4ebce >>> dnNormalize: <cn=Backend 4,cn=Backends,cn=Monitor>
=> ldap_bv2dn(cn=Backend 4,cn=Backends,cn=Monitor,0)
<= ldap_bv2dn(cn=Backend 4,cn=Backends,cn=Monitor)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=backend 4,cn=backends,cn=monitor)=0
5ff4ebce <<< dnNormalize: <cn=backend 4,cn=backends,cn=monitor>
5ff4ebce >>> dnNormalize: <cn=Listener 0>
5ff4ebce <<< dnNormalize: <cn=listener 0>
5ff4ebce >>> dnNormalize: <cn=Listener 1>
5ff4ebce <<< dnNormalize: <cn=listener 1>
5ff4ebce >>> dnNormalize: <cn=Listener 2>
5ff4ebce <<< dnNormalize: <cn=listener 2>
5ff4ebce >>> dnNormalize: <cn=Bind>
5ff4ebce <<< dnNormalize: <cn=bind>
5ff4ebce >>> dnNormalize: <cn=Unbind>
5ff4ebce <<< dnNormalize: <cn=unbind>
5ff4ebce >>> dnNormalize: <cn=Search>
5ff4ebce <<< dnNormalize: <cn=search>
5ff4ebce >>> dnNormalize: <cn=Compare>
5ff4ebce <<< dnNormalize: <cn=compare>
5ff4ebce >>> dnNormalize: <cn=Modify>
5ff4ebce <<< dnNormalize: <cn=modify>
5ff4ebce >>> dnNormalize: <cn=Modrdn>
5ff4ebce <<< dnNormalize: <cn=modrdn>
5ff4ebce >>> dnNormalize: <cn=Add>
5ff4ebce <<< dnNormalize: <cn=add>
5ff4ebce >>> dnNormalize: <cn=Delete>
5ff4ebce <<< dnNormalize: <cn=delete>
5ff4ebce >>> dnNormalize: <cn=Abandon>
5ff4ebce <<< dnNormalize: <cn=abandon>
5ff4ebce >>> dnNormalize: <cn=Extended>
5ff4ebce <<< dnNormalize: <cn=extended>
5ff4ebce >>> dnNormalize: <cn=Overlay 0>
5ff4ebce <<< dnNormalize: <cn=overlay 0>
5ff4ebce >>> dnNormalize: <cn=Bytes>
5ff4ebce <<< dnNormalize: <cn=bytes>
5ff4ebce >>> dnNormalize: <cn=PDU>
5ff4ebce <<< dnNormalize: <cn=pdu>
5ff4ebce >>> dnNormalize: <cn=Entries>
5ff4ebce <<< dnNormalize: <cn=entries>
5ff4ebce >>> dnNormalize: <cn=Referrals>
5ff4ebce <<< dnNormalize: <cn=referrals>
5ff4ebce >>> dnNormalize: <cn=Max>
5ff4ebce <<< dnNormalize: <cn=max>
5ff4ebce >>> dnNormalize: <cn=Max Pending>
5ff4ebce <<< dnNormalize: <cn=max pending>
5ff4ebce >>> dnNormalize: <cn=Open>
5ff4ebce <<< dnNormalize: <cn=open>
5ff4ebce >>> dnNormalize: <cn=Starting>
5ff4ebce <<< dnNormalize: <cn=starting>
5ff4ebce >>> dnNormalize: <cn=Active>
5ff4ebce <<< dnNormalize: <cn=active>
5ff4ebce >>> dnNormalize: <cn=Pending>
5ff4ebce <<< dnNormalize: <cn=pending>
5ff4ebce >>> dnNormalize: <cn=Backload>
5ff4ebce <<< dnNormalize: <cn=backload>
5ff4ebce >>> dnNormalize: <cn=State>
5ff4ebce <<< dnNormalize: <cn=state>
5ff4ebce >>> dnNormalize: <cn=Runqueue>
5ff4ebce <<< dnNormalize: <cn=runqueue>
5ff4ebce >>> dnNormalize: <cn=Tasklist>
5ff4ebce <<< dnNormalize: <cn=tasklist>
5ff4ebce >>> dnNormalize: <cn=Start>
5ff4ebce <<< dnNormalize: <cn=start>
5ff4ebce >>> dnNormalize: <cn=Current>
5ff4ebce <<< dnNormalize: <cn=current>
5ff4ebce >>> dnNormalize: <cn=Uptime>
5ff4ebce <<< dnNormalize: <cn=uptime>
5ff4ebce >>> dnNormalize: <cn=Read>
5ff4ebce <<< dnNormalize: <cn=read>
5ff4ebce >>> dnNormalize: <cn=Write>
5ff4ebce <<< dnNormalize: <cn=write>
5ff4ebce backend_startup_one: starting "dc=example,dc=org"
5ff4ebce hdb_db_open: "dc=example,dc=org"
5ff4ebce hdb_db_open: database "dc=example,dc=org": database already in use.
5ff4ebce backend_startup_one (type=hdb, suffix="dc=example,dc=org"): bi_db_open failed! (-1)
5ff4ebce slapd shutdown: initiated
5ff4ebce ====> bdb_cache_release_all
5ff4ebce slapd destroy: freeing system resources.
5ff4ebce slapd stopped.
To get here, I build and run an image based on the following Dockerfile:
FROM bitnami/openldap:2.4.56
COPY run.sh /opt/bitnami/scripts/openldap/run.sh
where run.sh just has "-1" instead of "256" on the line linked in the previous comment.
Hi @sumidiot
Thanks so much for sharing the logs!! Unfortunately, they don't seem to be very helpful... I was expecting more detail about why these lines appear in the logs:
5ff4ebce hdb_db_open: database "dc=example,dc=org": database already in use.
5ff4ebce backend_startup_one (type=hdb, suffix="dc=example,dc=org"): bi_db_open failed! (-1)
It's hard to tell, but as you mentioned, it's highly likely to be related to some issue or incompatibility with the filesystem.
I get the same issue from time to time with the same compose file on 2.4.58-debian-10-r18. I know this "solution" is really ugly, but it works for me :) Patch entrypoint.sh with a sleep after the LDAP setup:
if [[ "$*" = "/opt/bitnami/scripts/openldap/run.sh" ]]; then
info "** Starting LDAP setup **"
/opt/bitnami/scripts/openldap/setup.sh
info "** LDAP setup finished! **"
fi
+echo "Trying to prevent file access race"
+sleep 10
echo ""
exec "$@"
Hi @elarchenko
Thanks for sharing your solution! It may be worth trying to introduce a sleep in the ldap_stop function to ensure the resources are properly released. I mean, adding it in the lines below:
- https://github.com/bitnami/bitnami-docker-openldap/blob/master/2/debian-10/rootfs/opt/bitnami/scripts/libopenldap.sh#L190
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Same issue here, any hope it gets fixed? It is quite unexpected to have to patch the image to use it with Kubernetes.
Hi @rmannibucau
Did you try the workaround (adding some sleep before starting LDAP) suggested by other users on your K8s setup? Did it work?
I agree that we should definitely revisit what's going on in the ldap_stop function and improve its resiliency.
Hi @juan131, not really, since adding a "sleep" is guesswork; I moved to extending the image to make the filesystem more stable. I guess a retry mechanism with a maximum number of iterations could be a saner option (or a network check; I'm not 100% sure whether the issue is the shutdown or the filesystem, since I was using volumes at some point).
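The retry-with-a-cap idea suggested above could be sketched roughly as follows. This is a hypothetical helper, not part of the Bitnami scripts; the function name, limits, and usage are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch: run a command up to a maximum number of attempts, pausing
# between tries, instead of relying on a single fixed sleep.
retry_with_limit() {
    local max="$1" delay="$2"
    shift 2
    local attempt
    for (( attempt = 1; attempt <= max; attempt++ )); do
        "$@" && return 0   # command succeeded, stop retrying
        echo "Attempt ${attempt}/${max} failed, retrying in ${delay}s..." >&2
        sleep "$delay"
    done
    return 1               # exhausted all attempts
}
```

Starting slapd could then be wrapped as, e.g., `retry_with_limit 5 2 <start command>`, so a transient "database already in use" failure gets a few more chances instead of aborting the container.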
Hi @rmannibucau
From what I investigated, the issue is related to slapd taking too long to release filesystem resources.
During container initialization, slapd is restarted. The ldap_stop function stops slapd by killing the process (sending a SIGKILL signal) and, once the process is stopped, finishes successfully. However, it seems that some filesystem resources remain "locked" for a few seconds longer, and when slapd is started again the error below appears:
5ff4ebce hdb_db_open: database "dc=example,dc=org": database already in use.
We need to improve the ldap_stop function so it waits for the process to be stopped and also ensures the filesystem resources are released.
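The stop-side hardening described above could look something like the sketch below: after killing slapd, wait both for the PID to disappear and for the BDB lock files to be gone before returning. The function name, the `__db.*` lock-file pattern, and the timeout are illustrative assumptions, not the actual libopenldap.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch: block until a killed slapd has fully exited AND its Berkeley DB
# lock files have been removed from the data directory, or give up after
# a bounded number of one-second retries.
wait_for_ldap_release() {
    local pid="$1" data_dir="$2" retries="${3:-30}"
    while (( retries-- > 0 )); do
        if ! kill -0 "$pid" 2>/dev/null \
           && ! ls "$data_dir"/__db.* >/dev/null 2>&1; then
            return 0   # process exited and no lock files remain
        fi
        sleep 1
    done
    return 1           # still busy after the timeout
}
```

Called at the end of ldap_stop, this would prevent the subsequent start from racing the not-yet-released database environment.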
I experienced this as well, and had good luck with this patch: https://github.com/bitnami/bitnami-docker-openldap/pull/35
Hi @mattmoyer
Thanks for the PR! I added some comments since I think we should approach the "lock" files issue in a different way.
Hi, has this been fixed? I'm trying to run this on CentOS 7.9 and getting the same slapd error when I run this command:
docker run --name openldap \
-v /certs:/opt/bitnami/openldap/certs \
-v /openldap:/bitnami/openldap/ \
-e ALLOW_EMPTY_PASSWORD="no" \
-e LDAP_ENABLE_TLS="yes" \
-e LDAP_TLS_CERT_FILE="/opt/bitnami/openldap/certs/server-cert.pem" \
-e LDAP_TLS_KEY_FILE="/opt/bitnami/openldap/certs/server-key.pem" \
-e LDAP_TLS_CA_FILE="/opt/bitnami/openldap/certs/ca-cert.pem" \
-e BITNAMI_DEBUG="true" \
bitnami/openldap:latest
Error:
17:39:48.78 INFO ==> ** Starting LDAP setup **
17:39:48.82 INFO ==> Validating settings in LDAP_* env vars
17:39:48.83 INFO ==> Initializing OpenLDAP...
17:39:48.83 DEBUG ==> Ensuring expected directories/files exist...
17:39:48.84 INFO ==> Creating LDAP online configuration
17:39:48.87 INFO ==> Starting OpenLDAP server in background
17:39:48.91 INFO ==> Configure LDAP credentials for admin user
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={2}mdb,cn=config"
modifying entry "olcDatabase={1}monitor,cn=config"
17:39:48.92 INFO ==> Configuring TLS
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
17:39:48.93 INFO ==> Adding LDAP extra schemas
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=cosine,cn=schema,cn=config"
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=inetorgperson,cn=schema,cn=config"
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=nis,cn=schema,cn=config"
17:39:48.96 INFO ==> Creating LDAP default tree
adding new entry "dc=example,dc=org"
adding new entry "ou=users,dc=example,dc=org"
adding new entry "cn=user01,ou=users,dc=example,dc=org"
adding new entry "cn=user02,ou=users,dc=example,dc=org"
adding new entry "cn=readers,ou=users,dc=example,dc=org"
17:39:50.04 INFO ==> ** LDAP setup finished! **
17:39:50.06 INFO ==> ** Starting slapd **
62e026e6.04a022b6 0x7f2700eec740 @(#) $OpenLDAP: slapd 2.6.3 (Jul 15 2022 11:05:18) $
@100c8f9ee7cb:/bitnami/blacksmith-sandox/openldap-2.6.3/servers/slapd
62e026e6.055c39b0 0x7f2700eec740 slapd starting
It just stays there and hangs. Any suggestions?
We are going to transfer this issue to bitnami/containers
In order to unify the approaches followed in Bitnami containers and Bitnami charts, we are moving some issues in bitnami/bitnami-docker-<container> repositories to bitnami/containers.
Please follow bitnami/containers to stay updated about the latest Bitnami images.
More information here: https://blog.bitnami.com/2022/07/new-source-of-truth-bitnami-containers.html
Hi @harshkolhatkar
With the command at the beginning of this issue, are you facing the same problem?
docker run --detach --rm --name test-openldap --env LDAP_ADMIN_USERNAME=admin --env LDAP_ADMIN_PASSWORD=adminpassword --env LDAP_USERS=customuser --env LDAP_PASSWORDS=custompassword bitnami/openldap:latest
If that command works, please open a new issue; this is a very old topic and it may not be related to your problem.
I was actually able to get it working by exposing those ports on localhost. The logs then started populating once I ran ldapsearch against my domain. I guess the logs only say slapd is starting and don't confirm it has started successfully, which led me to believe there was some error. Thanks for your help!
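Since the "slapd starting" log line only means the process is launching, one way to confirm readiness is to poll with a real query until it succeeds. A minimal sketch; the helper name is made up, and the ldapsearch invocation in the comment assumes the defaults used earlier in this thread:

```shell
#!/usr/bin/env bash
# Sketch: succeed once the given check command succeeds, fail after
# roughly `timeout` seconds of one-second polls.
wait_until_ready() {
    local timeout="$1"
    shift
    local waited=0
    until "$@"; do
        (( waited++ >= timeout )) && return 1   # gave up
        sleep 1
    done
    return 0   # check command succeeded
}

# Hypothetical usage with the defaults from this thread:
# wait_until_ready 30 ldapsearch -x -H ldap://localhost:1389 \
#     -D cn=admin,dc=example,dc=org -w adminpassword -b dc=example,dc=org
```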
Unfortunately, this issue was created a long time ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. This is not for a technical reason but one of capacity, since we're a small team.
That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute.
During this year there have been several releases of this asset, and it's possible the issue has gone away as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.