helm-charts
[runix/pgadmin4] Pgadmin4 keeps asking for database password
Describe the bug: When I want to connect to a database server, the application keeps asking me for a database password even though it is configured.
Version of Helm and Kubernetes: Helm: v3.2.1 Kubernetes: v1.17.13
Which chart: runix/pgadmin4 version: pgadmin4-1.4.6 app version: 4.29.0
What happened: I have a pgpassfile secret:
k describe secret pgpassfile
Name: pgpassfile
Namespace: flexvoucher-dev
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
pgpassfile: 99 bytes
pgadmin % k get secret pgpassfile -o yaml
apiVersion: v1
data:
pgpassfile: <base64 encoded content>
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"pgpassfile","namespace":"flexvoucher-dev"},"stringData":{"pgpassfile":"flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:vksx28n65nlt03q5\n"},"type":"Opaque"}
creationTimestamp: "2021-01-19T14:18:47Z"
name: pgpassfile
namespace: flexvoucher-dev
resourceVersion: "40453082"
selfLink: /api/v1/namespaces/flexvoucher-dev/secrets/pgpassfile
uid: a884e2ed-b713-4a2c-a12e-8067aae67889
type: Opaque
pgadmin % echo <base64 encoded content> | base64 -d
flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:MySecretPassword
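For reference, each line of a pgpass file follows libpq's `hostname:port:database:username:password` format. A small illustrative sketch (not part of the thread) of parsing such a line, using the redacted example above:

```python
# Parse a libpq .pgpass line: hostname:port:database:username:password.
# Note: a literal ':' or '\' in a field would be backslash-escaped; this
# simple sketch assumes no escaped characters, as in the example above.
def parse_pgpass_line(line: str) -> dict:
    fields = line.strip().split(":")
    keys = ["hostname", "port", "database", "username", "password"]
    return dict(zip(keys, fields))

entry = parse_pgpass_line(
    "flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:"
    "defaultdb:doadmin:MySecretPassword\n"
)
print(entry["port"], entry["username"])  # -> 25060 doadmin
```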
In values.yaml I defined my server:
serverDefinitions:
  enabled: true
  servers: |-
    "1": {
      "Name": "flexVOUCHER",
      "Group": "Servers",
      "Port": 25060,
      "Username": "doadmin",
      "Host": "flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com",
      "SSLMode": "prefer",
      "MaintenanceDB": "defaultdb"
    }
I have env.pgpassfile defined:
env:
  # can be email or nickname
  email: [email protected]
  password: SuperSecret
  pgpassfile: /var/lib/pgadmin/storage/pgadmin/file.pgpass
VolumePermissions:
  ## If true, enables an InitContainer to set permissions on /var/lib/pgadmin.
  ##
  enabled: true
extraInitContainers: |
  - name: add-folder-for-pgpass
    image: "dpage/pgadmin4:4.23"
    command: ["/bin/mkdir", "-p", "/var/lib/pgadmin/storage/pgadmin"]
    volumeMounts:
      - name: pgadmin-data
        mountPath: /var/lib/pgadmin
    securityContext:
      runAsUser: 5050
The secret above is mounted as PV:
extraSecretMounts:
  - name: pgpassfile
    secret: pgpassfile
    subPath: pgpassfile
    mountPath: "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    readOnly: true
With the above configuration, the server is configured and shows up in the left panel, but when I try to connect to it I still have to provide the password.
What you expected to happen: I would expect not to be asked for a password.
Anything else we need to know: Below is some data from the deployment. All seems to be OK, IMO.
pgadmin % k get pods | grep my-pgadmin
my-pgadmin4-bf544d96f-v68cg 1/1 Running 0 10m
pgadmin % k describe pod my-pgadmin4-bf544d96f-v68cg
Name: my-pgadmin4-bf544d96f-v68cg
[...]
Mounts:
/pgadmin4/servers.json from definitions (rw,path="servers.json")
/var/lib/pgadmin from pgadmin-data (rw)
/var/lib/pgadmin/storage/pgadmin/file.pgpass from pgpassfile (ro,path="pgpassfile")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmn8x (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-pgadmin4
ReadOnly: false
pgpassfile:
Type: Secret (a volume populated by a Secret)
SecretName: pgpassfile
Optional: false
definitions:
Type: Secret (a volume populated by a Secret)
SecretName: my-pgadmin4
Optional: false
default-token-kmn8x:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kmn8x
Optional: false
[...]
The volumes are there. Let's see the details:
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- cat /pgadmin4/servers.json
{
"Servers": {
"1": {
"Name": "flexVOUCHER",
"Group": "Servers",
"Port": 25060,
"Username": "doadmin",
"Host": "flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com",
"SSLMode": "prefer",
"MaintenanceDB": "defaultdb"
}
}
}
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- cat /var/lib/pgadmin/storage/pgadmin/file.pgpass
flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:MySecretPassword
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- env | grep PGPASSFILE
PGPASSFILE=/var/lib/pgadmin/storage/pgadmin/file.pgpass
I can confirm this bug. The pgpass file is mounted with the wrong permissions. The secret file /var/lib/pgadmin/storage/pgadmin/file.pgpass is mounted as 0640, and it needs to be 0600.
What @gorlok said, please adjust the permissions of your file and it should function.
Actually, this is harder than I thought, because the file is read-only. Let me see if I can find a way that would look good.
Really sorry, I do not know a clean way around this. The only thing I did notice is that you forgot to add PassFile within your servers.json.
No, the PassFile in servers.json is ok. From https://www.postgresql.org/docs/8.2/libpq-pgpass.html: "The permissions on .pgpass must disallow any access to world or group; achieve this by the command chmod 0600 ~/.pgpass. If the permissions are less strict than this, the file will be ignored. (The file permissions are not currently checked on Microsoft Windows, however.)"
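To see libpq's rule in action outside the cluster, here is a small illustrative sketch (not from the thread) of the POSIX check it performs: any group or world permission bits on the file cause it to be ignored.

```python
import os
import stat
import tempfile

def pgpass_would_be_ignored(path: str) -> bool:
    """Mimic libpq's POSIX check: reject if group/world have any access."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return bool(mode & 0o077)

# A file mounted from a Kubernetes secret typically shows up as 0640 or 0644.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o640)
print(pgpass_would_be_ignored(path))  # -> True: group-readable, libpq skips it
os.chmod(path, 0o600)
print(pgpass_would_be_ignored(path))  # -> False: owner-only, libpq accepts it
os.unlink(path)
```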
I found a workaround. I added a lifecycle postStart hook to the pod: I copy the file to /tmp, fix the file's permissions (0600), and point the configuration at that copy. That works ok.
But sadly, I needed to fork the chart, because init containers didn't work for that. If you can add some way to configure a lifecycle/postStart hook, I can use the upstream chart again.
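For illustration, a postStart hook along these lines might look like the following in the pod spec. This is only a sketch under the assumption that the chart exposed such a field (it currently does not, hence the fork), and the paths simply reuse the mount locations from this issue:

```yaml
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        # Copy the read-only secret mount to a writable place and tighten
        # permissions so libpq will accept the file.
        - cp /var/lib/pgadmin/storage/pgadmin/file.pgpass /tmp/pgpass && chmod 600 /tmp/pgpass
```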
Hi guys, I've been playing with this for some time too and found a workaround.
According to the documentation, pgAdmin executes four configuration scripts: config.py, config_distro.py, config_local.py, and config_system.py.
The Docker image is a bit customized; it seems those config scripts are stored inside the /pgadmin4 directory. However, if you look at the end of /pgadmin4/config.py, it looks for config_system.py in a different (non-existent) location: /etc/pgadmin/config_system.py. This script is executed during setup and is intended for system adjustments, so from my point of view, this is the place where permissions for certificates and passwords should be configured. Just make sure somehow that your setup script is not run repeatedly; it seems to be called more than once.
Here is my setup, if it would help someone: Directory structure:
pgAdmin
├── config
│ ├── config.sh
│ ├── config_system.py
│ └── secrets
│ ├── cert
│ │ ├── client.crt
│ │ ├── client.key
│ │ └── ca.crt
│ └── dbserver_password
├── servers.json
└── storage
└── pgadmin_lala.la
docker-compose.yml:
pgAdmin:
  image: dpage/pgadmin4
  environment:
    - [email protected]
    - PGADMIN_DEFAULT_PASSWORD=pwd
  volumes:
    - ./pgAdmin/servers.json:/pgadmin4/servers.json:ro
    - ./pgAdmin/storage/:/var/lib/pgadmin/storage/
    - ./pgAdmin/config/secrets/:/secrets/:ro
    # Execute configuration scripts
    - ./pgAdmin/config/config_system.py:/etc/pgadmin/config_system.py
    - ./pgAdmin/config/config.sh:/config/config.sh
config_system.py:
import subprocess
subprocess.Popen(['sh', '/config/config.sh']).wait()
config.sh:
#!/bin/bash
# Configure this image: set correct permissions etc.
SECRETS_ROOT_PATH="/var/lib/pgadmin/storage/pgadmin_lala.la"
SECRETS_PATH="$SECRETS_ROOT_PATH/secrets"
if [ ! -d "$SECRETS_PATH" ]; then
  cp -R /secrets "$SECRETS_ROOT_PATH"
  find "$SECRETS_PATH" -type d -exec chmod 755 {} \;
  find "$SECRETS_PATH" -type f -exec chmod 600 {} \;
fi
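Since config_system.py may be executed more than once, the setup can be guarded with a sentinel file so it only runs the first time. An illustrative sketch (the sentinel-file guard is my own suggestion, not from the thread; /config/config.sh matches the mount above):

```python
# config_system.py - run the one-time setup script at most once,
# using a sentinel file as the guard.
import os
import subprocess

SENTINEL = "/var/lib/pgadmin/.config_system_done"

def run_setup_once(script="/config/config.sh", sentinel=SENTINEL) -> bool:
    """Run the setup script if the sentinel is absent; return True if it ran."""
    if os.path.exists(sentinel):
        return False
    subprocess.Popen(["sh", script]).wait()
    # Only mark the setup as done after the script has finished.
    with open(sentinel, "w") as f:
        f.write("done\n")
    return True
```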
I got it working with this hack:
Chart version: 1.8.2
values.yml
...
serverDefinitions:
  enabled: true
  # PassFile path is relative to the storage dir - /var/lib/pgadmin/storage/<user-email-like-dir>
  servers: |
    "1": {
      "Group": "Servers",
      "Name": "Primary Postgres",
      "Port": 5432,
      "Username": "postgres",
      "Host": "postgres-postgresql.db.svc.cluster.local",
      "PassFile": "../../pgpass",
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }

# Set proper rights for the pgpass file (and move it closer to the storage dir)
extraSecretMounts:
  - name: k8s-pgpass
    secret: k8s-pgpass
    mountPath: /pgpass

extraInitContainers: |
  - name: prepare-pgpass
    image: docker.io/dpage/pgadmin4:6.2
    command: [ 'sh', '-c', "cp /pgpass /var/lib/pgadmin/pgpass && chown pgadmin:pgadmin /var/lib/pgadmin/pgpass && chmod 600 /var/lib/pgadmin/pgpass" ]
    volumeMounts:
      - name: pgadmin-data
        mountPath: /var/lib/pgadmin
      - name: k8s-pgpass
        subPath: pgpass
        mountPath: /pgpass
    securityContext:
      runAsUser: 0
Hi, so there is still no resolution without hacks?
It looks like this is related to https://github.com/kubernetes/kubernetes/issues/57923, so I think there is nothing we can do except the already mentioned workarounds until that issue is resolved.