[bitnami/mongodb] Can't communicate with replicaset using external access
Name and Version
bitnami/mongodb 13.3.1
What steps will reproduce the bug?
I have deployed a MongoDB replica set (v6.0.2) using this chart (bitnami/mongodb 13.3.1). I have enabled external access with service type LoadBalancer, and my DNS records point to each external service IP:
- mongodb-0.dev -> 10.246.50.1
- mongodb-1.dev -> 10.246.50.2
- mongodb-2.dev -> 10.246.50.3
$ kubectl get pod,svc -owide
NAME READY STATUS
pod/mongodb-0 2/2 Running
pod/mongodb-1 2/2 Running
pod/mongodb-2 2/2 Running
pod/mongodb-arbiter-0 2/2 Running
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/mongodb-0-external LoadBalancer 10.43.182.101 10.246.50.1 27017:30727/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-0
service/mongodb-1-external LoadBalancer 10.43.130.10 10.246.50.2 27017:30137/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-1
service/mongodb-2-external LoadBalancer 10.43.86.149 10.246.50.3 27017:32246/TCP 6m6s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb,statefulset.kubernetes.io/pod-name=mongodb-2
service/mongodb-arbiter-headless ClusterIP None <none> 27017/TCP 8m51s app.kubernetes.io/component=arbiter,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
service/mongodb-headless ClusterIP None <none> 27017/TCP 8m51s app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
I run a container from OUTSIDE the cluster to test the client:
docker run --rm -it docker.io/bitnami/mongodb:6.0.2-debian-11-r1 bash
Option 1: Direct connection from OUTSIDE
I CAN connect using:
mongosh mongodb://mongodb-0.dev:27017 --authenticationDatabase admin -u root -p root1234
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017/?directConnection=true&authSource=admin
rs0 [direct: primary] test>
Option 2: Replica Set Connection from OUTSIDE
I CAN'T connect using:
mongosh "mongodb://mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017/?replicaSet=rs0" --authenticationDatabase admin -u root -p root1234
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017/?replicaSet=rs0&authSource=admin&appName=mongosh+1.6.0
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-0.mongodb-headless.mongodb.svc.cluster.local
Of course, if I add these DNS records it works (note the prompt changes):
- mongodb-0.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.1
- mongodb-1.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.2
- mongodb-2.mongodb-headless.mongodb.svc.cluster.local -> 10.246.50.3
Connecting to: mongodb://<credentials>@mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017/?replicaSet=rs0&authSource=admin
rs0 [primary] test>
But I don't want that workaround at the DNS level; it's clearly wrong. I also don't want to hardcode any IPs in /etc/hosts.
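For context, this is expected driver behaviour: the seed list in the URI is only used for the initial contact, after which the client re-discovers members from the hostnames the replica set itself advertises (here, the internal *.svc.cluster.local names). A small sketch of the Option 2 connection string, with the `/` before the query string and quoting so the shell does not interpret `?` and `&` (hostnames taken from the DNS records above):

```shell
# Build the Option 2 replica-set URI from the external DNS names.
# The "/" before the query string is required by the URI format,
# and the quoting keeps the shell from interpreting "?" and "&".
hosts="mongodb-0.dev:27017,mongodb-1.dev:27017,mongodb-2.dev:27017"
uri="mongodb://${hosts}/?replicaSet=rs0&authSource=admin"
echo "$uri"
# mongosh "$uri" -u root -p root1234   # requires a reachable cluster
```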
Option 3: From INSIDE the k8s cluster
This is just extra information in case it helps find the issue. I CAN connect using:
kubectl run mongo-client-6 --rm -ti --image=docker.io/bitnami/mongodb:6.0.2-debian-11-r1 -- bash
mongosh --host mongodb-headless --authenticationDatabase admin -u root -p root1234
Summary: I want to connect using Option 2 (replica set from OUTSIDE).
Are you using any custom parameters or values?
architecture: replicaset
replicaCount: 3
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
auth:
  enabled: true
  rootUser: root
  rootPassword: "root1234"
  replicaSetKey: "adfaerfeqawfdasefa"
What do you see instead?
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-0.mongodb-headless.mongodb.svc.cluster.local
Hi Carim
Have you tried upgrading your chart, setting externalAccess.service.loadBalancerIPs with the external IPs?
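For anyone following along, an upgrade along those lines might look like this (a sketch only: the release name mongodb and the three IPs are assumptions taken from the report above):

```shell
# Hypothetical sketch: pin the LoadBalancer IPs the chart should request
# and advertise (release name and IPs are assumptions, not from the chart docs).
helm upgrade mongodb bitnami/mongodb \
  --reuse-values \
  --set "externalAccess.service.loadBalancerIPs={10.246.50.1,10.246.50.2,10.246.50.3}"
```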
Yes, I tried everything. It didn't work.
I was trying to reproduce your issue in version 13.3.1 but I faced some issues (#13201) already fixed in the latest version (13.5.0 at the time of this comment). Could you try the latest version, enabling RBAC for autodiscovery?
architecture: replicaset
replicaCount: 3
rbac:
  create: true
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
auth:
  enabled: true
  rootUser: root
  rootPassword: "root1234"
  replicaSetKey: "adfaerfeqawfdasefa"
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
@cmanzur did you ever get this to work externally? I have the same problem as you: it works 100% inside Kubernetes, but external access does not. I get these errors:
mongodb5-2.mongodb5-headless.mongodb-dev.svc.cluster.local:27017: [Errno -2] Name or service not known
So from the looks of it, it will never work externally unless you hack /etc/hosts.
Hello, I was trying to connect with external access and RBAC settings enabled. When I include the parameter directConnection in the connection string, I can connect locally:
mongodb://root:root@localhost:27017/?directConnection=true
Regards,
Fabri
The issue still persists. I found that the replica set is initialised with member names like 192.168.0.7:30001 (where 192.168.0.7 is my internal host IP and 30001 the nodePort). It is possible to connect externally with directConnection=true, passing an FQDN or IP, but only to one node at a time. When you try to connect to the cluster without directConnection, the external client tries to reach every replica in the set right after the initial handshake and fails, since the 192.168.0.7 IP is only reachable from the cluster network. I guess MONGODB_ADVERTISED_HOSTNAME and the following lines are the culprit:
https://github.com/bitnami/charts/blob/973a2792e0bc5967e3180c6d44eebf223b9f1d83/bitnami/mongodb/templates/replicaset/statefulset.yaml#L202-L205
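One quick way to see what hostname the chart is injecting (a sketch; the pod and container names are assumptions based on the defaults used earlier in this thread):

```shell
# Print the advertised-hostname environment variables the chart sets on mongod
# (pod/container names "mongodb-0"/"mongodb" are assumptions).
kubectl exec mongodb-0 -c mongodb -- env | grep MONGODB_ADVERTISED
```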
values:
architecture: replicaset
replicaCount: 2
externalAccess:
  enabled: true
  service:
    type: NodePort
    port: 27017
    nodePorts:
      - 30001
      - 30002
    publicNames:
      - sub.domain.com
      - sub.domain.com
    domain: "sub.domain.com"
    externalTrafficPolicy: Cluster
```
members: [
  {
    _id: 0,
    name: '192.168.0.7:30001',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',
    uptime: 586,
    optime: [Object],
    optimeDate: 2024-09-28T22:21:37.000Z,
    lastAppliedWallTime: 2024-09-28T22:21:37.619Z,
    lastDurableWallTime: 2024-09-28T22:21:37.619Z,
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    electionTime: Timestamp({ t: 1727561547, i: 1 }),
    electionDate: 2024-09-28T22:12:27.000Z,
    configVersion: 4,
    configTerm: 8,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 1,
    name: '192.168.0.6:30002',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 562,
    optime: [Object],
    optimeDurable: [Object],
    optimeDate: 2024-09-28T22:21:37.000Z,
    optimeDurableDate: 2024-09-28T22:21:37.000Z,
    lastAppliedWallTime: 2024-09-28T22:21:37.619Z,
    lastDurableWallTime: 2024-09-28T22:21:37.619Z,
    lastHeartbeat: 2024-09-28T22:21:39.875Z,
    lastHeartbeatRecv: 2024-09-28T22:21:40.286Z,
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: '192.168.0.7:30001',
    syncSourceId: 0,
    infoMessage: '',
    configVersion: 4,
    configTerm: 8
  }
],
```
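A possible workaround at the database level (a sketch only: the external hostnames, ports, and credentials below are assumptions, and the chart's scripts may overwrite the config on pod restart) is to rewrite the advertised member hosts with rs.reconfig() from mongosh:

```shell
# Sketch: point the replica-set config at externally resolvable names.
# Connect directly to the current primary, then rewrite each member host
# (sub.domain.com, the nodePorts, and the credentials are assumptions).
mongosh "mongodb://192.168.0.7:30001/?directConnection=true" \
  --authenticationDatabase admin -u root -p root1234 --eval '
    cfg = rs.conf();
    cfg.members[0].host = "sub.domain.com:30001";
    cfg.members[1].host = "sub.domain.com:30002";
    rs.reconfig(cfg);
  '
```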
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.