[bitnami/mongodb] Setting the service port doesn't work
Name and Version
bitnami/mongodb version=12.1.31
What steps will reproduce the bug?
- Running these versions of kubectl, minikube, and Docker Desktop:
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
$ minikube version
minikube version: v1.26.1
commit: 62e108c3dfdec8029a890ad6d8ef96b6461426dc
Docker Desktop 4.11.1 (84025)
- My config is: values.yaml.txt
- I deploy into my minikube like this:
$ ./install_plain_mongo.sh
+ helm install --create-namespace --set service.ports.mongodb=11920 --values values.yaml --namespace pcloud mongodb mongodb-12.1.31.tgz --wait
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/tim/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/tim/.kube/config
NAME: mongodb
LAST DEPLOYED: Mon Sep 19 14:22:03 2022
NAMESPACE: pcloud
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 12.1.31
APP VERSION: 5.0.10
** Please be patient while the chart is being deployed **
MongoDB® can be accessed on the following DNS name(s) and ports from within your cluster:
mongodb-0.mongodb-headless.pcloud.svc.cluster.local:11920
mongodb-1.mongodb-headless.pcloud.svc.cluster.local:11920
mongodb-2.mongodb-headless.pcloud.svc.cluster.local:11920
To get the root password run:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace pcloud mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
To connect to your database, create a MongoDB® client container:
kubectl run --namespace pcloud mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:5.0.10-debian-11-r3 --command -- bash
Then, run the following command:
mongosh admin --host "mongodb-0.mongodb-headless.pcloud.svc.cluster.local:11920,mongodb-1.mongodb-headless.pcloud.svc.cluster.local:11920,mongodb-2.mongodb-headless.pcloud.svc.cluster.local:11920" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
Note that I turned off metrics due to the issue here: https://github.com/bitnami/charts/issues/10264
- I port-forward the changed service port so I can connect with Compass and look at the database:
$ k -n pcloud port-forward mongodb-0 11920:11920
Forwarding pod mongodb-0 on port 11920:11920..
Forwarding from 127.0.0.1:11920 -> 11920
Forwarding from [::1]:11920 -> 11920
- I then bring up the Compass UI to connect to the database with this URL:
mongodb://root:*****@127.0.0.1:11920/?authSource=admin&readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false
- I see this failure on the port forward when it tries to forward the Compass connection request:
Handling connection for 11920
E0919 14:23:47.100705 148030 portforward.go:406] an error occurred forwarding 11920 -> 11920: error forwarding port 11920 to pod 7144cbe071491e3276c9881bd5cac754fe2795e2e33412cfc43b5c6122534588, uid : exit status 1: 2022/09/19 18:23:47 socat[857096] E connect(5, AF=2 127.0.0.1:11920, 16): Connection refused
E0919 14:23:47.100950 148030 portforward.go:234] lost connection to pod
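For reference, one way to confirm which port mongod is actually bound to inside the pod (a diagnostic sketch; the config file path and container name follow the Bitnami image and chart defaults, they are not output from this thread):
$ kubectl -n pcloud exec mongodb-0 -c mongodb -- grep 'port:' /opt/bitnami/mongodb/conf/mongodb.conf
# If this still prints 27017, mongod is not listening on the new port,
# which would explain the "connection refused" from socat above.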
Are you using any custom parameters or values?
I am passing --set service.ports.mongodb=11920 on the helm install command line, along with the attached values.yaml.
What is the expected behavior?
With the changed service port forwarded, Compass should be able to connect successfully.
What do you see instead?
I see Compass hang and this in the output from the port-forward command:
Handling connection for 11920
E0919 14:23:47.100705 148030 portforward.go:406] an error occurred forwarding 11920 -> 11920: error forwarding port 11920 to pod 7144cbe071491e3276c9881bd5cac754fe2795e2e33412cfc43b5c6122534588, uid : exit status 1: 2022/09/19 18:23:47 socat[857096] E connect(5, AF=2 127.0.0.1:11920, 16): Connection refused
E0919 14:23:47.100950 148030 portforward.go:234] lost connection to pod
Additional information
I want to do this to meet new security requirements: our cloud services must not use MongoDB's default port of 27017.
Hi,
It seems you are executing port-forward against the pod itself. Note that service.ports changes the Service port, not the port inside the container. For that to work you would also need to modify the containerPorts section.
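For example, something along these lines (a sketch reusing the install command from this report, with both values set so the Service port and the container port agree):
$ helm install mongodb mongodb-12.1.31.tgz \
    --namespace pcloud --create-namespace \
    --values values.yaml \
    --set service.ports.mongodb=11920 \
    --set containerPorts.mongodb=11920 \
    --wait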
I used port-forward just to demonstrate that mongodb could not be accessed via the new port. My services cannot access the mongodb service on the new port number either. Let me review my services to double-check how they connect, and I will get back with an example of a service failing to access mongodb configured this way.
So I set containerPorts.mongodb=11920 and that appears to have solved the problem of my services using that port for access. To get it all working I had to define custom liveness and readiness probes for the mongodb container, and I had to provide metrics.args to correct the hardcoded 27017 port reference in the exporter's mongodb.uri argument in the original chart implementation. I will post the override details as a followup so that others can see how to deal with this.
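A sketch of what such an override can look like, assembled from the description above (the exact probe commands and exporter invocation are assumptions modeled on the chart defaults, not the literal followup values):
$ cat > port-override.yaml <<'EOF'
containerPorts:
  mongodb: 11920
service:
  ports:
    mongodb: 11920
# The default probes run mongosh against 27017, so point them at the new port.
customLivenessProbe:
  exec:
    command: ["mongosh", "--port", "11920", "--eval", "db.adminCommand('ping')"]
  initialDelaySeconds: 30
  periodSeconds: 20
  timeoutSeconds: 10
customReadinessProbe:
  exec:
    command:
      - bash
      - -ec
      - mongosh --port 11920 --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
metrics:
  enabled: true
  # Replace the exporter invocation so its connection URI uses the new port
  # instead of the hardcoded 27017 (assumes the chart's default shell entrypoint
  # for the metrics container, with the root password available as an env var).
  args:
    - >-
      /bin/mongodb_exporter --web.listen-address ":9216"
      --mongodb.uri "mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:11920/admin"
EOF
$ helm upgrade mongodb mongodb-12.1.31.tgz -n pcloud -f values.yaml -f port-override.yaml --wait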
As @javsalgar stated, with --set service.ports.mongodb=11920 you are changing the port of the Service, but with k -n pcloud port-forward mongodb-0 11920:11920 you are forwarding the pod's port; that is why you also need to change the container port.
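To check the Service-level change on its own, one can also port-forward the Service rather than the pod (a sketch; with the replicaset architecture the chart's headless service is named mongodb-headless, as in the NOTES above):
$ kubectl -n pcloud port-forward svc/mongodb-headless 11920:11920
# kubectl resolves the Service port to the pod's targetPort, so this reaches
# mongod even while the container itself still listens on 27017.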
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.