[bitnami/mongodb]: `arbiter` and `secondary` cannot connect to `primary` if `auth.enabled: false` in `replicaset` mode. Tag >= `4.4.13-debian-10-r50`
Name and Version
bitnami/mongodb 12.1.20
What steps will reproduce the bug?
Hey!
Tag >= 4.4.13-debian-10-r50:
- helm install mongodb bitnami/mongodb --create-namespace --namespace mongodb --set auth.enabled=false --set image.tag=4.4.13-debian-10-r50 --set architecture=replicaset
I was seriously trying to debug this as much as possible, but ended up just narrowing it down to which image tags work vs. which do not.
Are you using any custom parameters or values?
Custom parameters are:
--create-namespace --namespace mongodb --set auth.enabled=false --set image.tag=4.4.13-debian-10-r50 --set architecture=replicaset
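For reference, the same flags can be expressed as a values file (a direct translation of the `--set` parameters above, leaving everything else at the chart defaults):

```yaml
# values.yaml equivalent of the command-line flags above
architecture: replicaset
auth:
  enabled: false
image:
  tag: 4.4.13-debian-10-r50
```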
What is the expected behavior?
Tag <= 4.4.13-debian-10-r48:
- helm template mongodb bitnami/mongodb --set auth.enabled=false --set image.tag=4.4.13-debian-10-r48 --set architecture=replicaset > manifest.yaml
- Modify mongosh -> mongo in setup.sh, startup-probe.sh, readiness-probe.sh and ping-mongodb.sh in manifest.yaml in order to fix livenessProbe and readinessProbe. The r48 image simply does not have mongosh in it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-common-scripts
  namespace: "mongodb"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.20
    app.kubernetes.io/instance: mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
data:
  startup-probe.sh: |
    #!/bin/bash
    mongo $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
  readiness-probe.sh: |
    #!/bin/bash
    # Run the proper check depending on the version
    [[ $(mongod -version | grep "db version") =~ ([0-9]+\.[0-9]+\.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}
    . /opt/bitnami/scripts/libversion.sh
    VERSION_MAJOR="$(get_sematic_version "$VERSION" 1)"
    VERSION_MINOR="$(get_sematic_version "$VERSION" 2)"
    VERSION_PATCH="$(get_sematic_version "$VERSION" 3)"
    if [[ "$VERSION_MAJOR" -ge 5 ]] || [[ "$VERSION_MAJOR" -ge 4 ]] && [[ "$VERSION_MINOR" -ge 4 ]] && [[ "$VERSION_PATCH" -ge 2 ]]; then
        mongo $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
    else
        mongo $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.isMaster().ismaster || db.isMaster().secondary' | grep -q 'true'
    fi
  ping-mongodb.sh: |
    #!/bin/bash
    mongo $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval "db.adminCommand('ping')"
---
# Source: mongodb/templates/replicaset/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-scripts
  namespace: "mongodb"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.20
    app.kubernetes.io/instance: mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
data:
  setup.sh: |-
    #!/bin/bash
    . /opt/bitnami/scripts/mongodb-env.sh
    . /opt/bitnami/scripts/libfs.sh
    . /opt/bitnami/scripts/liblog.sh
    . /opt/bitnami/scripts/libvalidations.sh
    if is_empty_value "$MONGODB_ADVERTISED_PORT_NUMBER"; then
        export MONGODB_ADVERTISED_PORT_NUMBER="$MONGODB_PORT_NUMBER"
    fi
    info "Advertised Hostname: $MONGODB_ADVERTISED_HOSTNAME"
    info "Advertised Port: $MONGODB_ADVERTISED_PORT_NUMBER"
    # Check for existing replica set in case there is no data in the PVC
    # This is for cases where the PVC is lost or for MongoDB caches without
    # persistence
    current_primary=""
    if is_dir_empty "${MONGODB_DATA_DIR}/db"; then
        info "Data dir empty, checking if the replica set already exists"
        current_primary=$(mongo admin --host "mongodb-0.mongodb-headless.mongodb:27017,mongodb-1.mongodb-headless.mongodb:27017,mongodb-2.mongodb-headless.mongodb:27017,mongodb-3.mongodb-headless.mongodb:27017" --eval 'db.runCommand("ismaster")' | awk -F\' '/primary/ {print $2}')
        if ! is_empty_value "$current_primary"; then
            info "Detected existing primary: ${current_primary}"
        fi
    fi
    if ! is_empty_value "$current_primary" && [[ "$MONGODB_ADVERTISED_HOSTNAME:$MONGODB_ADVERTISED_PORT_NUMBER" == "$current_primary" ]]; then
        info "Advertised name matches current primary, configuring node as a primary"
        export MONGODB_REPLICA_SET_MODE="primary"
    elif ! is_empty_value "$current_primary" && [[ "$MONGODB_ADVERTISED_HOSTNAME:$MONGODB_ADVERTISED_PORT_NUMBER" != "$current_primary" ]]; then
        info "Current primary is different from this node. Configuring the node as replica of ${current_primary}"
        export MONGODB_REPLICA_SET_MODE="secondary"
        export MONGODB_INITIAL_PRIMARY_HOST="${current_primary%:*}"
        export MONGODB_INITIAL_PRIMARY_PORT_NUMBER="${current_primary#*:}"
        export MONGODB_SET_SECONDARY_OK="yes"
        info "MONGODB_REPLICA_SET_MODE is $MONGODB_REPLICA_SET_MODE"
        info "MONGODB_INITIAL_PRIMARY_PORT_NUMBER is $MONGODB_INITIAL_PRIMARY_PORT_NUMBER"
    elif [[ "$MY_POD_NAME" = "mongodb-0" ]]; then
        info "Pod name matches initial primary pod name, configuring node as a primary"
        export MONGODB_REPLICA_SET_MODE="primary"
    else
        info "Pod name doesn't match initial primary pod name, configuring node as a secondary"
        export MONGODB_REPLICA_SET_MODE="secondary"
        export MONGODB_INITIAL_PRIMARY_PORT_NUMBER="$MONGODB_PORT_NUMBER"
        info "MONGODB_REPLICA_SET_MODE is $MONGODB_REPLICA_SET_MODE"
        info "MONGODB_INITIAL_PRIMARY_PORT_NUMBER is $MONGODB_INITIAL_PRIMARY_PORT_NUMBER"
    fi
    if [[ "$MONGODB_REPLICA_SET_MODE" == "secondary" ]]; then
        export MONGODB_INITIAL_PRIMARY_ROOT_USER="$MONGODB_ROOT_USER"
        export MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD="$MONGODB_ROOT_PASSWORD"
        export MONGODB_ROOT_PASSWORD=""
        export MONGODB_EXTRA_USERNAMES=""
        export MONGODB_EXTRA_DATABASES=""
        export MONGODB_EXTRA_PASSWORDS=""
        export MONGODB_ROOT_PASSWORD_FILE=""
        export MONGODB_EXTRA_USERNAMES_FILE=""
        export MONGODB_EXTRA_DATABASES_FILE=""
        export MONGODB_EXTRA_PASSWORDS_FILE=""
    fi
    exec /opt/bitnami/scripts/mongodb/entrypoint.sh /opt/bitnami/scripts/mongodb/run.sh
---
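The manual mongosh -> mongo substitution described above can also be scripted; a minimal sketch, assuming GNU sed's in-place `-i` flag and that `mongosh` only ever appears as the shell binary name in the rendered scripts:

```shell
#!/bin/sh
# Swap every mongosh call for the legacy mongo shell in the rendered manifest
# (the r48 image only ships "mongo"). Guard against a missing file so the
# script is a no-op when manifest.yaml has not been rendered yet.
if [ -f manifest.yaml ]; then
    sed -i 's/mongosh/mongo/g' manifest.yaml
fi
```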
- Now you're good to deploy:
  kubectl apply -f manifest.yaml
- kubectl exec -it mongodb-0 -- bash and we see all 3 members in place:
I have no name!@mongodb-0:/$ mongo
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("7d9afa99-d527-4c7c-88fd-380b1f1059a3") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting:
2022-06-22T00:36:09.733+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2022-06-22T00:37:48.305Z"),
"myState" : 1,
"term" : NumberLong(3),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"votingMembersCount" : 3,
"writableVotingMembersCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"lastCommittedWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"readConcernMajorityWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"appliedOpTime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"durableOpTime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"lastAppliedWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"lastDurableWallTime" : ISODate("2022-06-22T00:37:41.598Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1655858221, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2022-06-22T00:36:21.592Z"),
"electionTerm" : NumberLong(3),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1655857836, 1),
"t" : NumberLong(2)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 5,
"electionTimeoutMillis" : NumberLong(10000),
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2022-06-22T00:36:21.596Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2022-06-22T00:36:21.698Z")
},
"members" : [
{
"_id" : 0,
"name" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 99,
"optime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2022-06-22T00:37:41Z"),
"lastAppliedWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"lastDurableWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "Could not find member to sync from",
"electionTime" : Timestamp(1655858181, 1),
"electionDate" : ISODate("2022-06-22T00:36:21Z"),
"configVersion" : 5,
"configTerm" : 3,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "mongodb-arbiter-0.mongodb-arbiter-headless.mongodb.svc.cluster.local:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 82,
"lastHeartbeat" : ISODate("2022-06-22T00:37:47.519Z"),
"lastHeartbeatRecv" : ISODate("2022-06-22T00:37:47.520Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 5,
"configTerm" : 3
},
{
"_id" : 2,
"name" : "mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 73,
"optime" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"optimeDurable" : {
"ts" : Timestamp(1655858261, 1),
"t" : NumberLong(3)
},
"optimeDate" : ISODate("2022-06-22T00:37:41Z"),
"optimeDurableDate" : ISODate("2022-06-22T00:37:41Z"),
"lastAppliedWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"lastDurableWallTime" : ISODate("2022-06-22T00:37:41.598Z"),
"lastHeartbeat" : ISODate("2022-06-22T00:37:47.520Z"),
"lastHeartbeatRecv" : ISODate("2022-06-22T00:37:46.856Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 5,
"configTerm" : 3
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1655858261, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1655858261, 1)
}
What do you see instead?
Nodes failed to connect to the primary:
- kubectl logs mongodb-arbiter-0 --follow:
mongodb 00:17:23.18
mongodb 00:17:23.18 Welcome to the Bitnami mongodb container
mongodb 00:17:23.18 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 00:17:23.18 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 00:17:23.19
mongodb 00:17:23.19 INFO ==> ** Starting MongoDB setup **
mongodb 00:17:23.20 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 00:17:23.21 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 00:17:23.22 INFO ==> Initializing MongoDB...
mongodb 00:17:23.23 INFO ==> Deploying MongoDB from scratch...
mongodb 00:17:24.57 INFO ==> Creating users...
mongodb 00:17:24.57 INFO ==> Users created
mongodb 00:17:24.59 INFO ==> Configuring MongoDB replica set...
mongodb 00:17:24.59 INFO ==> Stopping MongoDB...
mongodb 00:17:28.09 INFO ==> Trying to connect to MongoDB server mongodb-0.mongodb-headless.mongodb.svc.cluster.local...
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
mongodb 00:17:43.12 INFO ==> Found MongoDB server listening at mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017 !
mongodb 00:21:34.35 ERROR ==> Node mongodb-0.mongodb-headless.mongodb.svc.cluster.local did not become available
mongodb 00:21:34.35 INFO ==> Stopping MongoDB...
- kubectl exec -it mongodb-0 -- bash:
I have no name!@mongodb-0:/$ mongo
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("0935d88e-184a-4f00-b162-57065f0e2531") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting:
2022-06-22T00:17:46.932+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2022-06-22T00:23:48.178Z"),
"myState" : 1,
"term" : NumberLong(2),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1655857427, 1),
"t" : NumberLong(2)
},
"lastCommittedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1655857427, 1),
"t" : NumberLong(2)
},
"readConcernMajorityWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
"appliedOpTime" : {
"ts" : Timestamp(1655857427, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1655857427, 1),
"t" : NumberLong(2)
},
"lastAppliedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
"lastDurableWallTime" : ISODate("2022-06-22T00:23:47.650Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1655857427, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2022-06-22T00:17:47.641Z"),
"electionTerm" : NumberLong(2),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1655857065, 9),
"t" : NumberLong(1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 5,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2022-06-22T00:17:47.644Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2022-06-22T00:17:47.745Z")
},
"members" : [
{
"_id" : 0,
"name" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 362,
"optime" : {
"ts" : Timestamp(1655857427, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2022-06-22T00:23:47Z"),
"lastAppliedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
"lastDurableWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1655857067, 1),
"electionDate" : ISODate("2022-06-22T00:17:47Z"),
"configVersion" : 1,
"configTerm" : 2,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1655857427, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1655857427, 1)
}
- kubectl exec -it mongodb-1 -- bash:
I have no name!@mongodb-1:/$ mongo
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ad8102f6-931b-469c-a71e-51cba3bad370") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting:
2022-06-22T00:22:00.054+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> rs.status()
{
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
Additional information
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-200-37-89.ec2.internal Ready <none> 26h v1.22.9-eks-810597c 10.200.37.89 <none> Amazon Linux 2 5.4.196-108.356.amzn2.x86_64 containerd://1.4.13
ip-10-200-38-202.ec2.internal Ready <none> 26h v1.22.9-eks-810597c 10.200.38.202 <none> Amazon Linux 2 5.4.196-108.356.amzn2.x86_64 containerd://1.4.13
Are you able to reproduce the issue by not overriding the image.tag? Just deploying the Helm chart with any other custom parameter but keeping the default container image bundled in the Helm chart
Yes. The issue is reproducible with all image tags >= 4.4.13-debian-10-r50, including the most recent one, 5.0.9-debian-10-r15. Images with tags <= 4.4.13-debian-10-r48 are fine: 4.4.13-debian-10-r48 is the last one that works, while the next one, 4.4.13-debian-10-r50, and every one after it do not.
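Within the 4.4.13-debian-10 series, that split can be expressed as a tiny check on the Bitnami revision suffix (an illustrative helper, not part of the chart, and only meaningful for this tag series; 5.x tags such as 5.0.9-debian-10-r15 are affected regardless of revision):

```shell
#!/bin/sh
# Illustrative helper: within the 4.4.13-debian-10 series, report whether a
# tag's revision suffix falls in the broken range.
is_affected() {
    rev="${1##*-r}"    # strip everything through the final "-r" -> revision number
    [ "$rev" -ge 50 ]  # r50 and later fail; r48 and earlier work
}

is_affected "4.4.13-debian-10-r50" && echo "broken"
is_affected "4.4.13-debian-10-r48" || echo "works"
```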
I have no name!@mongodb-0:/$ mongo
MongoDB shell version v5.0.9
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("75d7fbcb-4f92-41b9-b9a5-3c501e8a8037") }
MongoDB server version: 5.0.9
================
Warning: the "mongo" shell has been superseded by "mongosh",
which delivers improved usability and compatibility. The "mongo" shell has been deprecated and will be removed in
an upcoming release.
For installation instructions, see
https://docs.mongodb.com/mongodb-shell/install/
================
---
The server generated these startup warnings when booting:
2022-06-22T15:10:33.753+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2022-06-22T15:11:35.624Z"),
"myState" : 1,
"term" : NumberLong(2),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1655910694, 1),
"t" : NumberLong(2)
},
"lastCommittedWallTime" : ISODate("2022-06-22T15:11:34.471Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1655910694, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1655910694, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1655910694, 1),
"t" : NumberLong(2)
},
"lastAppliedWallTime" : ISODate("2022-06-22T15:11:34.471Z"),
"lastDurableWallTime" : ISODate("2022-06-22T15:11:34.471Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1655910684, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2022-06-22T15:10:34.466Z"),
"electionTerm" : NumberLong(2),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1655910632, 15),
"t" : NumberLong(1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 5,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2022-06-22T15:10:34.469Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2022-06-22T15:10:34.474Z")
},
"members" : [
{
"_id" : 0,
"name" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 62,
"optime" : {
"ts" : Timestamp(1655910694, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2022-06-22T15:11:34Z"),
"lastAppliedWallTime" : ISODate("2022-06-22T15:11:34.471Z"),
"lastDurableWallTime" : ISODate("2022-06-22T15:11:34.471Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "Could not find member to sync from",
"electionTime" : Timestamp(1655910634, 1),
"electionDate" : ISODate("2022-06-22T15:10:34Z"),
"configVersion" : 1,
"configTerm" : 2,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1655910694, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1655910694, 1)
}
arbiter logs:
$ kubectl logs mongodb-arbiter-0 --follow
mongodb 15:10:14.48
mongodb 15:10:14.48 Welcome to the Bitnami mongodb container
mongodb 15:10:14.48 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 15:10:14.48 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 15:10:14.48
mongodb 15:10:14.48 INFO ==> ** Starting MongoDB setup **
mongodb 15:10:14.50 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 15:10:14.52 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 15:10:14.53 INFO ==> Initializing MongoDB...
mongodb 15:10:14.54 INFO ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.200.32.115:27017
mongodb 15:10:15.90 INFO ==> Creating users...
mongodb 15:10:15.90 INFO ==> Users created
mongodb 15:10:15.92 INFO ==> Configuring MongoDB replica set...
mongodb 15:10:15.92 INFO ==> Stopping MongoDB...
mongodb 15:10:19.24 INFO ==> Trying to connect to MongoDB server mongodb-0.mongodb-headless.mongodb.svc.cluster.local...
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
mongodb 15:10:30.27 INFO ==> Found MongoDB server listening at mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017 !
mongodb 15:14:14.13 ERROR ==> Node mongodb-0.mongodb-headless.mongodb.svc.cluster.local did not become available
mongodb 15:14:14.13 INFO ==> Stopping MongoDB...
Hi @eugene-marchanka, it seems like this started to fail around the time we added mongo-shell, which happened in 4.4.13-debian-10-r50. I will create an internal task for investigating this.
I can confirm this issue. I have the same problem when creating a new replica set.
And the workaround of using 4.4.13-debian-10-r48 also works.
What do you see instead?
Nodes failed to connect to `primary`:
`kubectl logs mongodb-arbiter-0 --follow`:

```
mongodb 00:17:23.18
mongodb 00:17:23.18 Welcome to the Bitnami mongodb container
mongodb 00:17:23.18 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb 00:17:23.18 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb 00:17:23.19
mongodb 00:17:23.19 INFO ==> ** Starting MongoDB setup **
mongodb 00:17:23.20 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 00:17:23.21 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 00:17:23.22 INFO ==> Initializing MongoDB...
mongodb 00:17:23.23 INFO ==> Deploying MongoDB from scratch...
mongodb 00:17:24.57 INFO ==> Creating users...
mongodb 00:17:24.57 INFO ==> Users created
mongodb 00:17:24.59 INFO ==> Configuring MongoDB replica set...
mongodb 00:17:24.59 INFO ==> Stopping MongoDB...
mongodb 00:17:28.09 INFO ==> Trying to connect to MongoDB server mongodb-0.mongodb-headless.mongodb.svc.cluster.local...
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
cannot resolve host "mongodb-0.mongodb-headless.mongodb.svc.cluster.local": lookup mongodb-0.mongodb-headless.mongodb.svc.cluster.local: no such host
mongodb 00:17:43.12 INFO ==> Found MongoDB server listening at mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017 !
mongodb 00:21:34.35 ERROR ==> Node mongodb-0.mongodb-headless.mongodb.svc.cluster.local did not become available
mongodb 00:21:34.35 INFO ==> Stopping MongoDB...
```
`kubectl exec -it mongodb-0 -- bash`:

```
I have no name!@mongodb-0:/$ mongo
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("0935d88e-184a-4f00-b162-57065f0e2531") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting:
    2022-06-22T00:17:46.932+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
    Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
    The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
    To enable free monitoring, run the following command: db.enableFreeMonitoring()
    To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2022-06-22T00:23:48.178Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 1,
    "writeMajorityCount" : 1,
    "votingMembersCount" : 1,
    "writableVotingMembersCount" : 1,
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1655857427, 1), "t" : NumberLong(2) },
        "lastCommittedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1655857427, 1), "t" : NumberLong(2) },
        "readConcernMajorityWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
        "appliedOpTime" : { "ts" : Timestamp(1655857427, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1655857427, 1), "t" : NumberLong(2) },
        "lastAppliedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
        "lastDurableWallTime" : ISODate("2022-06-22T00:23:47.650Z")
    },
    "lastStableRecoveryTimestamp" : Timestamp(1655857427, 1),
    "electionCandidateMetrics" : {
        "lastElectionReason" : "electionTimeout",
        "lastElectionDate" : ISODate("2022-06-22T00:17:47.641Z"),
        "electionTerm" : NumberLong(2),
        "lastCommittedOpTimeAtElection" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
        "lastSeenOpTimeAtElection" : { "ts" : Timestamp(1655857065, 9), "t" : NumberLong(1) },
        "numVotesNeeded" : 1,
        "priorityAtElection" : 5,
        "electionTimeoutMillis" : NumberLong(10000),
        "newTermStartDate" : ISODate("2022-06-22T00:17:47.644Z"),
        "wMajorityWriteAvailabilityDate" : ISODate("2022-06-22T00:17:47.745Z")
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 362,
            "optime" : { "ts" : Timestamp(1655857427, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2022-06-22T00:23:47Z"),
            "lastAppliedWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
            "lastDurableWallTime" : ISODate("2022-06-22T00:23:47.650Z"),
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1655857067, 1),
            "electionDate" : ISODate("2022-06-22T00:17:47Z"),
            "configVersion" : 1,
            "configTerm" : 2,
            "self" : true,
            "lastHeartbeatMessage" : ""
        }
    ],
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1655857427, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    },
    "operationTime" : Timestamp(1655857427, 1)
}
```
`kubectl exec -it mongodb-1 -- bash`:

```
I have no name!@mongodb-1:/$ mongo
MongoDB shell version v4.4.13
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ad8102f6-931b-469c-a71e-51cba3bad370") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting:
    2022-06-22T00:22:00.054+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
---
    Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
    The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
    To enable free monitoring, run the following command: db.enableFreeMonitoring()
    To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> rs.status()
{
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized"
}
```
Additional information
```
$ kubectl get nodes -o wide
NAME                            STATUS   ROLES    AGE   VERSION               INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-200-37-89.ec2.internal    Ready    <none>   26h   v1.22.9-eks-810597c   10.200.37.89    <none>        Amazon Linux 2   5.4.196-108.356.amzn2.x86_64   containerd://1.4.13
ip-10-200-38-202.ec2.internal   Ready    <none>   26h   v1.22.9-eks-810597c   10.200.38.202   <none>        Amazon Linux 2   5.4.196-108.356.amzn2.x86_64   containerd://1.4.13
```
```
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  27s   default-scheduler  Successfully assigned default/mongodb-0 to node-172-30-34-219
  Normal   Pulling    26s   kubelet            Pulling image "mongodb:4.4.13-debian-10-r48"
  Normal   Pulled     19s   kubelet            Successfully pulled image "mongodb:4.4.13-debian-10-r48" in 7.004653263s
  Normal   Created    19s   kubelet            Created container mongodb
  Normal   Started    19s   kubelet            Started container mongodb
  Warning  Unhealthy  7s    kubelet            Readiness probe failed: /bitnami/scripts/readiness-probe.sh: line 9: mongosh: command not found
```

```
[root@master-172-30-35-53 charts]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
mongodb-0           0/1     Running   0          33s
mongodb-arbiter-0   1/1     Running   0          33s
```
The mongodb:4.4.13-debian-10-r48 image is also not working for me.
It seems you forgot to modify the readiness probe script, as mentioned above:
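A minimal sketch of that `mongosh` → `mongo` swap, since the r48 image ships only the legacy `mongo` shell. The snippet below demonstrates the substitution on a sample probe script copied from the ConfigMap earlier in the thread; in practice you would run the `sed` over the whole rendered `manifest.yaml`, and the `\b` word-boundary pattern assumes GNU sed:

```shell
#!/bin/bash
# Write a sample readiness probe (same content as the chart's ConfigMap)
# to a temp file; the real target is the rendered manifest.yaml.
cat > /tmp/readiness-probe.sh <<'EOF'
mongosh $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
EOF

# Replace every standalone `mongosh` invocation with `mongo`.
sed -i 's/\bmongosh\b/mongo/g' /tmp/readiness-probe.sh

# Confirm no `mongosh` reference survived.
grep -q 'mongosh' /tmp/readiness-probe.sh || echo "patched"   # → patched
```

The same substitution applies to `setup.sh`, `startup-probe.sh`, and `ping-mongodb.sh`.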
I am also affected. Should I use 4.4.13-debian-10-r48, or wait for a fix?
@macrozone If you are not able to get it working with newer versions, you could try the suggestions made by @eugene-marchanka. If not, you could go back to the old version for now, until the issue is fixed.
I don't see where @eugene-marchanka suggested a workaround, but I might need to look more carefully.
I am also trying to set it up with authentication, but I honestly don't understand how replicaSetKey works, or in particular how to set it up. I understand that it points to a keyfile used by the replica set members to authenticate, but I don't know how to set it up in the chart. Unfortunately it is not documented in this repo.
Can anyone point me to some docs on how to set it up properly?
@macrozone It refers to the `MONGODB_REPLICA_SET_KEY` environment variable for the container. It should be a string of more than 5 characters, without special characters.
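As a sketch of how that could look in practice (assumption: in recent chart versions the Helm value is `auth.replicaSetKey`, which the chart surfaces to the container as `MONGODB_REPLICA_SET_KEY`):

```shell
# Generate an alphanumeric key matching the stated constraints:
# longer than 5 characters, no special characters.
REPLICA_SET_KEY="$(head -c 1024 /dev/urandom | tr -dc 'a-zA-Z0-9' | cut -c1-32)"
echo "key length: ${#REPLICA_SET_KEY}"   # → key length: 32

# Then pass it at install time (not executed here; value name is an
# assumption based on recent chart versions):
#   helm install mongodb bitnami/mongodb \
#     --set architecture=replicaset \
#     --set auth.enabled=true \
#     --set auth.replicaSetKey="$REPLICA_SET_KEY"
```

Generating the key once and reusing it on upgrades matters, because the members must all share the same keyfile.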
Thank you. I assume this will be used the first time the chart is installed?
I tried multiple times and wiped the volume, but it's just stuck in a crash loop.
(Sorry, the log is a bit garbled; I copied it from Stackdriver and tried to remove the non-relevant parts like timestamps.)
```
** Starting MongoDB setup **
Validating settings in MONGODB_* env vars...
Initializing MongoDB...
Enabling authentication...
Deploying MongoDB with persisted data...
Writing keyfile for replica set authentication...
** MongoDB setup finished! **
{}
** Starting MongoDB **
{}
{"c":"CONTROL", "ctx":"-", "id":20698, "msg":"***** SERVER RESTARTED *****", "s":"I", "t":{…}}
{"c":"CONTROL", "ctx":"-", "id":23285, "msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'", "s":"I", "t":{…}}
{"attr":{…}, "c":"NETWORK", "ctx":"-", "id":4915701, "msg":"Initialized wire specification", "s":"I", "t":{…}}
{"c":"ASIO", "ctx":"main", "id":22601, "msg":"No TransportLayer configured during NetworkInterface startup", "s":"W", "t":{…}}
{"c":"NETWORK", "ctx":"main", "id":4648601, "msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.", "s":"I", "t":{…}}
{"attr":{…}, "c":"ACCESS", "ctx":"main", "id":20254, "msg":"Read security file failed", "s":"I", "t":{…}}
{"c":"ASIO", "ctx":"main", "id":22582, "msg":"Killing all outstanding egress activity.", "s":"I", "t":{…}}
{"attr":{…}, "c":"CONTROL", "ctx":"main", "id":20575, "msg":"Error creating service context", "s":"F", "t":{…}}
```
There is this one: "Read security file failed".
EDIT: never mind, I used the log tool wrong; I see the error now (invalid char).
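For anyone hitting the same "Read security file failed" crash loop, the key constraints can be checked locally before installing. `validate_key` below is a hypothetical helper mirroring the rules stated above (more than 5 characters, no special characters); it is not part of the chart or image:

```shell
#!/bin/bash
# Hypothetical pre-flight check for a replica set key candidate;
# mirrors the constraints mentioned in this thread.
validate_key() {
  key="$1"
  if [ "${#key}" -le 5 ]; then
    echo "too short"
  elif printf '%s' "$key" | grep -q '[^a-zA-Z0-9]'; then
    echo "invalid char"
  else
    echo "ok"
  fi
}

validate_key 'abc'                 # → too short
validate_key 'my$secret$key'       # → invalid char
validate_key 'myReplicaSetKey123'  # → ok
```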
Hi,
A new image has been released that includes a fix for this (`5.0.12-debian-11-r4`, `6.0.1-debian-11-r11`). Could you give it a try?
I am closing this issue. If you encounter further problems, please don't hesitate to reopen it.
Sorry for the delay.
I can confirm the issue is resolved.
Thank you very much! 🙏🏻