Unable to load servers.json if oauth2 enabled only
Hello, we tried your chart and for some reason we are unable to load the pre-configured DB definitions from servers.json for all users. If I enable internal mode it works, but only for the initial user.
It would be nice to have an option to load servers.json for OAuth2 users.
Can you perhaps post your configuration so I can take a look at how you've set it up and what might be incorrect?
Hi @rowanruseler. Here is our config.
```
apiVersion: v1
kind: Secret
metadata:
  name: pgadmin4-config
type: Opaque
stringData:
  config_local.py: |-
    MASTER_PASSWORD_REQUIRED = True
    AUTHENTICATION_SOURCES = ['oauth2']
    OAUTH2_AUTO_CREATE_USER = True
    OAUTH2_CONFIG = [
        {
            'OAUTH2_NAME': 'Azure',
            'OAUTH2_DISPLAY_NAME': '******',
            'OAUTH2_CLIENT_ID': '*******',
            'OAUTH2_CLIENT_SECRET': '*********',
            'OAUTH2_TOKEN_URL': '******/oauth2/v2.0/token',
            'OAUTH2_AUTHORIZATION_URL': '*******oauth2/v2.0/authorize',
            'OAUTH2_API_BASE_URL': '*******/oauth2/v2.0/',
            'OAUTH2_USERINFO_ENDPOINT': 'https://graph.microsoft.com/oidc/userinfo',
            'OAUTH2_ICON': 'fa-database',
            'OAUTH2_BUTTON_COLOR': '#0000ff',
            'OAUTH2_SCOPE': 'openid'
        }
    ]
---
apiVersion: v1
kind: Secret
metadata:
  name: pgpassfile
type: Opaque
stringData:
  # https://www.postgresql.org/docs/9.4/libpq-pgpass.html
  pgpassfile: |
    server.uat:#{password.uat}#
    server.dev:#{password.dev}#
    server.prod:#{password.prod}#
```
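Side note on the mounted pgpass file: per the PostgreSQL docs linked above, each line needs all five fields, `hostname:port:database:username:password` (with `*` allowed as a wildcard in the first four), and the `#{...}#` tokens presumably get substituted by the deployment pipeline. A hypothetical expanded line for the UAT server above might look like:

```
# hostname:port:database:username:password
server.uat:5432:*:db:actual-uat-password
```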
```
replicaCount: 1

## pgAdmin4 container image
##
image:
  registry: docker.io
  repository: dpage/pgadmin4
  tag: "5.7"
  pullPolicy: IfNotPresent

## Deployment annotations
annotations: {}

service:
  type: ClusterIP
  port: 80
  targetPort: http
  # targetPort: 4181 To be used with a proxy extraContainer
  portName: http
  annotations: {}

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  name: ""

strategy: {}

serverDefinitions:
  ## If true, server definitions will be created
  ##
  enabled: true
  servers: |-
    "1": {
      "Name": "UAT",
      "Group": "Servers",
      "Port": 5432,
      "Username": "db",
      "Host": "server.uat",
      "SSLMode": "prefer",
      "MaintenanceDB": "db",
      "PassFile": "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    },
    "2": {
      "Name": "DEV",
      "Group": "Servers",
      "Port": 5432,
      "Username": "db",
      "Host": "server.dev",
      "SSLMode": "prefer",
      "MaintenanceDB": "db",
      "PassFile": "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    }

networkPolicy:
  enabled: true

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: host.net
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: pgadmin-tls
      hosts:
        - pgadmin.host.net

# Additional config maps to be mounted inside a container
# Can be used to map config maps for sidecar as well
extraConfigmapMounts: []

extraSecretMounts:
  - name: config-local
    secret: pgadmin4-config
    subPath: config_local.py
    mountPath: "/pgadmin4/config_local.py"
    readOnly: true
  - name: pgpassfile
    secret: pgpassfile
    subPath: pgpassfile
    mountPath: "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    readOnly: true

existingSecret: ""

env:
  pgpassfile: /var/lib/pgadmin/storage/pgadmin/file.pgpass
  # can be email or nickname
  email: [email protected]
  enhanced_cookie_protection: "False"

persistentVolume:
  enabled: true
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: 10Gi

securityContext:
  runAsUser: 5050
  runAsGroup: 5050
  fsGroup: 5050

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 60
  timeoutSeconds: 15
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 60
  timeoutSeconds: 15
  successThreshold: 1
  failureThreshold: 3

VolumePermissions:
  ## If true, enables an InitContainer to set permissions on /var/lib/pgadmin.
  ##
  enabled: true

## Additional InitContainers to initialize the pod
##
extraInitContainers: |
  - name: add-folder-for-pgpass
    image: "dpage/pgadmin4:4.23"
    command: ["/bin/mkdir", "-p", "/var/lib/pgadmin/storage/pgadmin"]
    volumeMounts:
      - name: pgadmin-data
        mountPath: /var/lib/pgadmin
    securityContext:
      runAsUser: 5050

containerPorts:
  http: 80

resources:
  limits:
    cpu: 300m
    memory: 500Mi
  requests:
    cpu: 150m
    memory: 400Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

## Pod affinity
##
affinity: {}

## Pod annotations
##
podAnnotations: {}

## Pod labels
##
podLabels: {}
# key1: value1
# key2: value2

init:
  ## Init container resources
  ##
  resources: {}

## Define values for chart tests
test:
  ## Container image for test-connection.yaml
  image:
    registry: docker.io
    repository: busybox
    tag: latest
  ## Resources request/limit for test-connection Pod
  resources: {}
  securityContext:
    runAsUser: 5051
    runAsGroup: 5051
    fsGroup: 5051
```
@IvanKolbasiuk you are missing the `Shared` key in your definitions:
```
servers: |-
  "1": {
    "Name": "UAT",
    "Group": "Servers",
    "Port": 5432,
    "Username": "db",
    "Host": "server.uat",
    "SSLMode": "prefer",
    "MaintenanceDB": "db",
    "PassFile": "/var/lib/pgadmin/storage/pgadmin/file.pgpass",
    "Shared": true
  }
```
But I couldn't get usernames and passwords to be shared, so OAuth2 users see the same server list but have no username or password set.
@rowanruseler did you get that part working?
Sorry for the ping @rowanruseler, but I was wondering if you have had similar issues? :bow:
Yeah, I am not really sure what is going wrong here. I'll try to find some time this week to test this out and see where it goes wrong.
I am also facing the same issue. After adding `"Shared": true`, my OAuth2 users get prompted for both username and password, even though they are present in the pgpass file. My configs are exactly as above.
I can also attest to OAuth2 users needing the DB username and password.
I'm seeing the same issue. It works as expected with internal authentication, but the servers fail to load with OAuth2 user authentication.
Same here. It seems like OAuth2 users only have access to their home directory (`/var/lib/pgadmin/storage/email_domain/` for me).
Related pgAdmin issue: https://github.com/pgadmin-org/pgadmin4/issues/5824
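As a manual workaround (untested here, and the invocation changed across releases), pgAdmin's own setup.py can import server definitions for a specific account, so one could exec into the pod and load the JSON for an OAuth2 user after their first login:

```
# Hypothetical file path and email; run inside the pgadmin4 pod.
# Older pgAdmin releases:
python /pgadmin4/setup.py --load-servers /tmp/servers.json --user oauth2-user@example.com
# Newer releases use a subcommand instead:
python /pgadmin4/setup.py load-servers /tmp/servers.json --user oauth2-user@example.com
```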
I think this issue is related to the general issue: https://github.com/rowanruseler/helm-charts/issues/193
This should be fixed in the 6.20 Docker image.
This error keeps happening in version 6.21: OAuth2 users can see the server list, but no username or password is set.
Same here with both OAuth2 and internal users on a multi-user installation. I have managed to share servers via the `"Shared": true` JSON variable and to feed a password via the pgpass file, but the PG username is still required for all imported servers for every pgAdmin user except the one who imports the JSON file.
If a user sets the username at least once, it gets stored in their preferences and won't be required anymore.
I guess this has something to do with the way pgAdmin encodes usernames/passwords in its database. Maybe something related to the master password.
P.S. Note that the pgpass file can only be used to set the password. The first four fields in that file, including the username, are used only for locating the right password, not to set it in the connection string. Also note that connection parameters changed in recent versions of pgAdmin. Be sure to set them correctly in the JSON, along these lines:
```
{
  "Servers": {
    "1": {
      "Group": "Servers",
      "Name": "My Server",
      "Port": 5432,
      "Host": "postgresql",
      "Username": "postgres",
      "MaintenanceDB": "postgres",
      "Shared": true,
      "ConnectionParameters": {
        "passfile": "../../file.pgpass"
      }
    }
  }
}
```
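For that example to supply the password, the pgpass file reached via the relative `passfile` path would need a line whose first four fields match the server definition (hypothetical password shown):

```
# hostname:port:database:username:password
postgresql:5432:*:postgres:my-secret-password
```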
FYI: for me it only asks for the username, which is related to https://github.com/pgadmin-org/pgadmin4/issues/6229. The only difference between my config and the one above is that the pgpass file is mounted at `/var/lib/pgadmin/file.pgpass`.
@bitnik When `Shared` is `true` and `SharedUsername` is set, the logged-in OAuth2 user still gets asked for the password. How do you provide it for the shared configuration? Via a passfile? If so, how: a relative or an absolute path, and in which key?