sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf: No such file or directory
Name and Version
bitnami/mongo:7.0.6
What architecture are you using?
None
What steps will reproduce the bug?
helm show values oci://registry-1.docker.io/bitnamicharts/mongodb > mongovalues.yml
change architecture to replicaset in mongovalues.yml
helm install mongo -f mongovalues.yml oci://registry-1.docker.io/bitnamicharts/mongodb
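For reference, the override file only needs the parameter being changed, not the whole rendered template; a minimal sketch (the helm command appears as a comment since it needs a live cluster):

```shell
# Build a minimal override file containing only the changed parameter; against
# a live cluster you would then run:
#   helm install mongo -f "$vals" oci://registry-1.docker.io/bitnamicharts/mongodb
vals=$(mktemp)
cat > "$vals" <<'EOF'
architecture: replicaset
EOF
out=$(cat "$vals")
echo "$out"
rm -f "$vals"
```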
What is the expected behavior?
The MongoDB replica set installs and starts successfully.
What do you see instead?
$ kubectl logs -f mongo-mongodb-0
mongodb 02:54:12.06 INFO ==> Advertised Hostname: mongo-mongodb-0.mongo-mongodb-headless.mongo-staging-6.svc.cluster.local
mongodb 02:54:12.06 INFO ==> Advertised Port: 27017
mongodb 02:54:12.06 INFO ==> Data dir empty, checking if the replica set already exists
MongoNetworkError: connect ECONNREFUSED 10.1.0.149:27017
mongodb 02:54:13.34 INFO ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 02:54:13.35
mongodb 02:54:13.35 Welcome to the Bitnami mongodb container
mongodb 02:54:13.35 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 02:54:13.35 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 02:54:13.35
mongodb 02:54:13.36 INFO ==> ** Starting MongoDB setup **
mongodb 02:54:13.37 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 02:54:13.47 INFO ==> Initializing MongoDB...
sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf: No such file or directory
Are you able to reproduce the issue using the default values?
No. The default architecture is standalone, which works, but changing it to replicaset causes this issue.
In order to reproduce the issue on our side, could you please share the parameters that were modified?
architecture: standalone
Changed to
architecture: replicaset
> In order to reproduce the issue on our side, could you please share the parameters that were modified?

Have you been able to?
Hi @Jaysins
I can't reproduce the issue on my side:
$ cat my-values.yaml
architecture: replicaset
$ helm install mongo -f my-values.yaml oci://registry-1.docker.io/bitnamicharts/mongodb
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-mongodb-0 1/1 Running 0 59s
mongo-mongodb-arbiter-0 1/1 Running 0 59s
$ kubectl logs mongo-mongodb-0
Warning: Use tokens from the TokenRequest API or manually created secret-based tokens instead of auto-generated secret-based tokens.
mongodb 10:39:42.39 INFO ==> Advertised Hostname: mongo-mongodb-0.mongo-mongodb-headless.test.svc.cluster.local
mongodb 10:39:42.39 INFO ==> Advertised Port: 27017
realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb 10:39:42.40 INFO ==> Data dir empty, checking if the replica set already exists
MongoNetworkError: getaddrinfo ENOTFOUND mongo-mongodb-1.mongo-mongodb-headless.test.svc.cluster.local
mongodb 10:39:43.18 INFO ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 10:39:43.19 INFO ==>
mongodb 10:39:43.20 INFO ==> Welcome to the Bitnami mongodb container
mongodb 10:39:43.20 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 10:39:43.20 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 10:39:43.20 INFO ==>
mongodb 10:39:43.21 INFO ==> ** Starting MongoDB setup **
mongodb 10:39:43.23 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 10:39:43.29 INFO ==> Initializing MongoDB...
mongodb 10:39:43.35 INFO ==> Writing keyfile for replica set authentication...
mongodb 10:39:43.36 INFO ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.20.0.11:27017
mongodb 10:39:44.95 INFO ==> Creating users...
mongodb 10:39:44.96 INFO ==> Creating root user...
What kind of cluster are you deploying the asset in?
Sometimes, it is useful to enable the debug option in order to obtain more detailed logs. Could you please do that by specifying that in your installation params?
image:
  debug: true
architecture: replicaset
Could you please provide the output of the following commands as well?
- kubectl get pvc
- kubectl describe pod/mongo-mongodb-0
I also get this error. Have you found a solution?
sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf: No such file or directory
@serkan1st1 your yaml file should only contain parameters you want to set. Do not include the whole template.
Just create a YAML file with only the necessary configuration:
architecture: replicaset
replicaCount: 3
If it still doesn't work, use the `--set` flag instead.
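The `--set` form encodes the same keys; a sketch that only assembles the command string to show the mapping between YAML keys and flags (no cluster is contacted):

```shell
# Equivalent of the two-line values file expressed as --set flags.
cmd="helm install mongo --set architecture=replicaset --set replicaCount=3 oci://registry-1.docker.io/bitnamicharts/mongodb"
echo "$cmd"
```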
@joancafom
This is my current setup to install mongo on helm
helm upgrade mongo \
--set architecture=replicaset \
--set replicaCount=3 \
--set persistence.size=500Gi \
--set automountServiceAccountToken=true \
--set externalAccess.enabled=true \
--set externalAccess.autoDiscovery.enabled=true \
--set externalAccess.service.type=LoadBalancer \
--set externalAccess.service.port=27017 \
--set resources.requests.memory="1Gi" \
--set resources.requests.cpu="1000m" \
--set resources.limits.memory="2Gi" \
--set resources.limits.cpu="1500m" \
--set rbac.create=true \
--set containerSecurityContext.runAsGroup=0 \
--set containerSecurityContext.runAsUser=0 \
--set readinessProbe.initialDelaySeconds=240 \
--set arbiter.enabled=false \
--namespace mongo-staging-v2 \
--create-namespace \
oci://registry-1.docker.io/bitnamicharts/mongodb
However, my pod never leaves the not-ready state, and when I run
kubectl describe pod
---- ------ ---- ---- -------
Warning Unhealthy 12s (x1224 over 3h) kubelet Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
Error: Not ready
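For what it's worth, that probe error suggests the probe's mongosh invocation tried to create its state directory at /.mongodb/mongosh (i.e. HOME resolved to /) and the directory could not be created. The ENOENT itself is just what mkdir reports when the parent path is unusable; a minimal simulation (the path below is illustrative):

```shell
# mkdir without -p fails with "No such file or directory" (ENOENT) when the
# parent directory is missing, the same error class the probe logs.
err=$(mkdir "/tmp/no-such-parent-$$/mongosh" 2>/dev/null || echo "mkdir failed")
echo "$err"
```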
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongo-mongodb-0 0/1 Running 1 (3h3m ago) 3h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongo-mongodb-0-external LoadBalancer 172.20.180.59 abf1bb010918b484cba59eb58e7c01a6-206515190.us-east-2.elb.amazonaws.com 27017:31696/TCP 3h4m
service/mongo-mongodb-1-external LoadBalancer 172.20.184.248 a79280639dcd24dcabda0014bc300160-181666778.us-east-2.elb.amazonaws.com 27017:30144/TCP 3h4m
service/mongo-mongodb-2-external LoadBalancer 172.20.169.79 a4606f81b135a42fca2fc070f375fc24-657973497.us-east-2.elb.amazonaws.com 27017:30759/TCP 3h4m
service/mongo-mongodb-headless ClusterIP None <none> 27017/TCP 3h4m
NAME READY AGE
statefulset.apps/mongo-mongodb 0/3 3h4m
and when I check logs
jayson@jayson-IdeaPad-5-15ITL05:~$ kubectl logs -f mongo-mongodb-0
Defaulted container "mongodb" out of: mongodb, auto-discovery (init)
mongodb 11:31:59.24 INFO ==> Advertised Hostname: abf1bb010918b484cba59eb58e7c01a6-206515190.us-east-2.elb.amazonaws.com
mongodb 11:31:59.25 INFO ==> Advertised Port: 27017
mongodb 11:31:59.25 INFO ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb 11:31:59.27 INFO ==>
mongodb 11:31:59.27 INFO ==> Welcome to the Bitnami mongodb container
mongodb 11:31:59.27 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 11:31:59.27 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 11:31:59.28 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
mongodb 11:31:59.28 INFO ==>
mongodb 11:31:59.28 INFO ==> ** Starting MongoDB setup **
mongodb 11:31:59.30 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 11:31:59.34 INFO ==> Initializing MongoDB...
mongodb 11:31:59.40 INFO ==> Writing keyfile for replica set authentication...
mongodb 11:31:59.42 INFO ==> Enabling authentication...
mongodb 11:31:59.42 INFO ==> Deploying MongoDB with persisted data...
mongodb 11:31:59.46 INFO ==> ** MongoDB setup finished! **
mongodb 11:31:59.50 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2024-03-25T11:31:59.555Z"},"s":"I", "c":"CONTROL", "id":5760901, "ctx":"main","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":false}}}}
Apparently, to bypass this you first have to install the bitnami/mongodb chart without the external service enabled:
helm install mongo \
--set architecture=replicaset \
--set replicaCount=3 \
--set persistence.size=500Gi \
--set resources.requests.memory="1Gi" \
--set resources.requests.cpu="1000m" \
--set resources.limits.memory="2Gi" \
--set resources.limits.cpu="1500m" \
--set readinessProbe.initialDelaySeconds=60 \
--set readinessProbe.periodSeconds=80 \
--set readinessProbe.timeoutSeconds=60 \
--set arbiter.enabled=false \
--namespace mongo-staging-v2 \
--create-namespace \
oci://registry-1.docker.io/bitnamicharts/mongodb
Then upgrade with the external access options present:
helm upgrade mongo \
--set architecture=replicaset \
--set replicaCount=3 \
--set persistence.size=500Gi \
--set automountServiceAccountToken=true \
--set resources.requests.memory="1Gi" \
--set resources.requests.cpu="1000m" \
--set resources.limits.memory="2Gi" \
--set resources.limits.cpu="1500m" \
--set externalAccess.enabled=true \
--set externalAccess.autoDiscovery.enabled=true \
--set externalAccess.service.type=LoadBalancer \
--set externalAccess.service.port=27017 \
--set rbac.create=true \
--set serviceAccount.create=true \
--set readinessProbe.initialDelaySeconds=60 \
--set readinessProbe.periodSeconds=80 \
--set readinessProbe.timeoutSeconds=60 \
--set arbiter.enabled=false \
--namespace mongo-staging-v2 \
oci://registry-1.docker.io/bitnamicharts/mongodb
However, the external URL won't be accessible via mongosh:
mongosh admin --host "XXXX.us-east-2.elb.amazonaws.com:1111,YYYY.us-east-2.elb.amazonaws.com:30308,ZZZZ.us-east-2.elb.amazonaws.com:31904" --authenticationDatabase admin -u username -p password
returns MongoNetworkError: getaddrinfo ENOTFOUND
mongo --host "XXXX.us-east-2.elb.amazonaws.com:32246,YYYY.us-east-2.elb.amazonaws.com:30308,ZZZZ.us-east-2.elb.amazonaws.com:31904" --authenticationDatabase admin -u username -p password
this works.
Hi @serkan1st1, I still can't hit the issue 😞... It would be nice if you shared more details of your setup. What kind of cluster are you using?
Were you able to solve the issue related to the mongodb.conf file @Jaysins ? Maybe you can share some inputs in here as well.
Regarding your new concerns, let me just say that at first sight it seems you are using a wrong endpoint in your mongosh command:
XXXX.us-east-2.elb.amazonaws.com:1111
Beware that 1111 does not seem to be the correct port number.
At the same time, you refer to the same one correctly in your mongo command later:
XXXX.us-east-2.elb.amazonaws.com:32246
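As a sanity check, the externally reachable port is the NodePort that kubectl prints in the `<port>:<nodePort>/TCP` column; for example, parsing a line captured from the kubectl get all output earlier in this thread (hostname shortened for illustration):

```shell
# Extract the NodePort from a kubectl service line (sample data from above).
line='service/mongo-mongodb-0-external LoadBalancer 172.20.180.59 abf1bb0...elb.amazonaws.com 27017:31696/TCP 3h4m'
nodeport=$(echo "$line" | grep -oE '27017:[0-9]+' | cut -d: -f2)
echo "$nodeport"
```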
Having said that, if you are now experiencing a new (different) issue, could you please raise a new ticket? We try to keep issues single-themed so that users can easily find solutions to similar problems.
Hi, @joancafom, I have the same issue with the mongodb.conf,
sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf: No such file or directory
I think it has to do with the image tag. Try reproducing with image tag 4.4-debian-10.
This is my values.yaml
architecture: replicaset
replicaCount: 3
image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: 4.4-debian-10
FWIW, we just upgraded both the Helm chart and MongoDB: MongoDB 6.0.6 -> 7.0.7 and chart 14.13.0 -> 15.1.1. No special upgrade procedure was performed; this is on a test server. It seems to have fixed this specific issue.
values.yaml:
mongodb:
image:
debug: true
architecture: replicaset
replicaCount: 3
arbiter:
enabled: false
auth:
enabled: false
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: "1"
memory: "4Gi"
Chart.lock:
dependencies:
- name: mongodb
repository: https://charts.bitnami.com/bitnami
version: 15.1.1
[...]
For us the problem started when we upgraded the chart 14.4.1 -> 14.13.0 if I understood correctly.
We upgraded from 14.13.0 to 15.1.0. Same issue: the replica set configuration never reaches the Ready state in Kubernetes; the healthcheck is failing.
When I jump into the pod and run db.isMaster().isMaster and db.isMaster().secondary, both are false. For some reason the new pod in the StatefulSet won't join the other 2 that exist. I can confirm DNS and network connectivity are working.
Enabling debug doesn't seem helpful, as the logs look like the other running pods in the StatefulSet:
Defaulted container "mongodb" out of: mongodb, metrics, generate-tls-certs (init), auto-discovery (init)
mongodb 18:18:51.02 INFO ==> Advertised Hostname: <IP Address omitted>
mongodb 18:18:51.02 INFO ==> Advertised Port: 27017
mongodb 18:18:51.03 INFO ==> Pod name doesn't match initial primary pod name, configuring node as a secondary
mongodb 18:18:51.04 INFO ==>
mongodb 18:18:51.04 INFO ==> Welcome to the Bitnami mongodb container
mongodb 18:18:51.04 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 18:18:51.05 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 18:18:51.05 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
mongodb 18:18:51.05 INFO ==>
mongodb 18:18:51.05 DEBUG ==> Copying files from /opt/bitnami/mongodb/conf.default to /opt/bitnami/mongodb/conf
mongodb 18:18:51.06 INFO ==> ** Starting MongoDB setup **
mongodb 18:18:51.07 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 18:18:51.12 INFO ==> Initializing MongoDB...
mongodb 18:18:51.13 DEBUG ==> /mongodb.conf mounted. Skipping setting port and IPv6 settings
mongodb 18:18:51.13 DEBUG ==> /mongodb.conf mounted. Skipping setting log settings
mongodb 18:18:51.13 DEBUG ==> /mongodb.conf mounted. Skipping setting log settings
mongodb 18:18:51.13 DEBUG ==> /mongodb.conf mounted. Skipping setting storage settings
mongodb 18:18:51.14 INFO ==> Writing keyfile for replica set authentication...
mongodb 18:18:51.14 DEBUG ==> /mongodb.conf mounted. Skipping keyfile location configuration
mongodb 18:18:51.15 DEBUG ==> /mongodb.conf mounted. Skipping authorization enabling
mongodb 18:18:51.15 INFO ==> Deploying MongoDB with persisted data...
mongodb 18:18:51.15 DEBUG ==> /mongodb.conf mounted. Skipping replicaset mode enabling
mongodb 18:18:51.15 DEBUG ==> /mongodb.conf mounted. Skipping authorization enabling
mongodb 18:18:51.16 DEBUG ==> /mongodb.conf mounted. Skipping IP binding to all addresses
mongodb 18:18:51.16 DEBUG ==> Skipping loading custom scripts on non-primary nodes...
mongodb 18:18:51.16 INFO ==> ** MongoDB setup finished! **
mongodb 18:18:51.18 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2024-04-04T18:18:51.224Z"},"s":"I", "c":"CONTROL", "id":5760901, "ctx":"main","msg":"Applied --setParameter options","attr":{"serverParameters":{"enableLocalhostAuthBypass":{"default":true,"value":false}}}}
One thing I've noticed is that on the root filesystem most things are owned by root, but mongo is now running as UID 1001. Is it possible this patch broke upgrades but not new installations?
Edit: Yes the other Pods (running chart version 14.13.0) have the runAsGroup: 0 setting, whereas the new Pod is runAsGroup: 1001.
Hello! Have you found a solution? I get this error too.
During my first upgrade I got the same issue:
- Helm chart upgrade from 14.8.0 to 14.13.0
- Image tag: 5.0.24 (image ref was 5.0.24-debian-11-r2)
Using the command `crictl inspecti docker.io/bitnami/mongodb:5.0.24` I found `"org.opencontainers.image.ref.name": "5.0.24-debian-11-r2"`.
Some values:
```
architecture: standalone
useStatefulSet: true
tls:
enabled: true
autoGenerated: false
standalone:
existingSecret: mongodb-tls-selfsigned
mode: allowTLS
mTLS:
enabled: false
extraFlags:
- --tlsAllowConnectionsWithoutCertificates
```
I found it does not download the latest image ref of 5.0.24, so I forced the image pull:
- Image tag: 5.0.24 (image ref is now 5.0.24-debian-11-r20)
So ensure you upgrade to the latest digest of your image version (you can add `image.pullPolicy: Always` in your Helm values).
After that I upgraded the chart to 15.1.3 and hit the error below:
mongodb 21:16:58.81 INFO ==>
mongodb 21:16:58.81 INFO ==> Welcome to the Bitnami mongodb container
mongodb 21:16:58.82 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 21:16:58.82 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 21:16:58.82 INFO ==>
cp: cannot open '/opt/bitnami/mongodb/conf.default/./mongodb.conf' for reading: Permission denied
That file inside the container has the permissions below:
$ ls -ld $MONGODB_BASE_DIR/conf.default/mongodb.conf
-rw-r----- 1 root root 1018 Feb 19 15:16 /opt/bitnami/mongodb/conf.default/mongodb.conf
So the only way I found to make it work is changing the security context back to 0:
podSecurityContext:
fsGroup: 0
containerSecurityContext:
runAsGroup: 0
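For context, the failure mode is plain Unix permissions: the older images ship mongodb.conf as root:root with mode 640, so a process running as the chart 15.x default UID/GID 1001 matches neither the owner nor the group and has no applicable read bit. A minimal sketch outside the container (paths illustrative; `stat -c` is GNU coreutils):

```shell
# Create a file with the same mode as the old images' mongodb.conf and show
# that mode 640 grants read only to the owner and group, not to others.
tmp=$(mktemp)
chmod 640 "$tmp"              # -rw-r----- like /opt/bitnami/mongodb/conf.default/mongodb.conf
perms=$(stat -c '%a' "$tmp")  # GNU stat; BSD/macOS would need: stat -f '%Lp'
echo "$perms"
rm -f "$tmp"
```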
Hi everyone, thanks for the input!
I can reproduce the issue if I follow the steps in recent replies (in which the chart is being upgraded from version 14.x.x to 15.x.x but still uses an old image). First of all, let me explain why this happens...
We recently released the new major version 15.x.x of the bitnami/mongodb chart, you can see the release notes here. In this new major, the security defaults have been updated and the chart runs with more secure options by default.
These changes required some prior adaptation in the Docker images, which was included in them before the major release of the chart. This means that older versions/tags of the bitnami/mongodb image (which do not include those changes) will not work properly out of the box with the new version of the chart.
If you still want to use older images in the chart, you can revert the new security defaults as explained in the release notes.
Hi @joancafom, I'm using your latest MongoDB 5.0 image (5.0.24-debian-11-r20) but we still hit the issue.
If MongoDB 5.0 (and 6.0) images are not supported by Helm chart version 15.x.y, it would probably be better to state that in the upgrade release notes.
In the latest MongoDB 7.0 images, the file indicated in the error appears to have the correct permissions instead:
I have no name!@6c78d2c024b1:/opt/bitnami/mongodb$ ls -ld /opt/bitnami/mongodb/conf.default/mongodb.conf
-rw-r--r-- 1 root root 1018 Apr 9 20:26 /opt/bitnami/mongodb/conf.default/mongodb.conf
If possible, the latest MongoDB 5.0 and 6.0 images (which are still not EOL) should be fixed to be compatible with the 15.x.y Helm chart version.
@joancafom which docker images work with the latest helm chart then? I'll happily use them. Is there a reason the latest docker image is not bundled with the Helm chart? That seems broken out of the box.
I found a solution: you need to specify
configuration: |
  # mongod.conf
  .......
I had the same problem while trying to apply some new values to my chart.
In my case, I'm running a sharded cluster on Kubernetes, and I fixed it by upgrading the image.
Non-working image: bitnami/mongodb-sharded:6.0.11-debian-11-r1
Working image: bitnami/mongodb-sharded:6.0.13-debian-11-r20
Hope this helps somebody, I almost lost my entire cluster this morning 😢
So the question then is - when will Bitnami use the functional images in the Helm Chart?
Hi everyone,
Is anyone facing issues using the latest MongoDB chart version (15.1.4) and the latest image included by default in the chart (7.0.8-debian-12-r2)?
> If possible, latest MongoDB 5.0 and 6.0 images (which are still not EOL) should be fixed to be compatible to 15.x.y helm chart version
@kladiv it is not possible for Bitnami to overwrite a previously published image. The workaround, for a previous version that isn't compatible with the new security defaults, is to revert those defaults as explained in the release notes.
You could release newer versions of the images like 5.0.24-debian-11-r21 or 6.0.13-debian-11-r22 (not present at the moment)
Also, MongoDB 6.0.14 (released at the end of February) and 6.0.15 are missing.
Hi @kladiv @lombardialess
I'm afraid we cannot release new versions of the 5.x and 6.x branches since MongoDB is not publishing binaries for Debian 12.
It's also not possible for our systems to release revisions of versions that we no longer support. Therefore, for those using these versions, I'm afraid the only alternative is to revert the security defaults as I previously mentioned.
Hi @juan131 maybe you can evaluate a new approach:
a) write in the upgrade release notes the image branch versions supported by each Helm chart version (in this case 7.x)
b) it would be great if Bitnami (if allowed by your policies) still provided image versions that are not yet EOL by the vendor; for MongoDB, the 5.x and 6.x versions are not EOL
c) in the README of each chart, create a table tracking supported/unsupported image branch versions (or reference any other webpage where this information is already provided)
This approach could perhaps also be adopted for your other Bitnami Docker images/charts.
Thank you
Hi @kladiv
We have some internal discussions currently about creating changelog(s) in the charts so we can document this kind of stuff. More news to come.
Regarding "option b", we do support 5.x and 6.x in the VMware Tanzu Application Catalog, the enterprise edition of the Bitnami Application Catalog, which supports other distros (e.g. Ubuntu, RedHat or PhotonOS). However, it's not possible to support them on Debian 12 (the one used in the Bitnami Application Catalog) since MongoDB is not publishing binaries for this distro.
My concern is that there is a serious security CVE for MongoDB (https://jira.mongodb.org/browse/SERVER-72839) addressed in version 6.0.14, and we can't get a binary tagged as 6.0.14 on https://hub.docker.com/r/bitnami/mongodb.
BTW, the tags are showing as Debian 11, not 12. Did the chart change to using Debian 12 for the 6.x MongoDB container?