couchdb-helm
admin hashes get regenerated on pod restart
The CouchDB Dockerfiles lay down an `[admins]` section in `/opt/couchdb/etc/local.d/docker.ini` here. The Helm chart is currently configured such that `/opt/couchdb/etc/default.d` is persistent but `/opt/couchdb/etc/local.d` is not.

As a result, the admin hashes are regenerated whenever a CouchDB pod restarts, invalidating any existing session cookies and leading to intermittent auth failures when cookies are used.
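For context, the generated file looks roughly like this (the hash value shown is purely illustrative; the entrypoint writes a fresh one on each container start):

```ini
; /opt/couchdb/etc/local.d/docker.ini (illustrative)
; The hash below is regenerated by the Docker entrypoint unless an
; [admins] section is already present in this file.
[admins]
admin = -pbkdf2-5f4dcc3b0a1b2c3d...,a2b4c6d8e0f1a3b5...,10
```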
I think the safest fix is to make `/opt/couchdb/etc/local.d` persistent as well; the Dockerfile will already skip laying down a new `[admins]` section if one is present.
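A sketch of what that could look like in the StatefulSet's container spec, reusing a `subPath` of the existing data PVC (the volume name and subpath here are illustrative assumptions, not the chart's actual values):

```yaml
# Fragment of the couchdb container spec; assumes the chart's existing
# per-pod PersistentVolumeClaim is named "database-storage".
volumeMounts:
  - name: database-storage
    mountPath: /opt/couchdb/data
  - name: database-storage
    mountPath: /opt/couchdb/etc/local.d
    subPath: local.d        # hypothetical subdirectory on the same PV
```

Because `docker.ini` would then survive restarts, the entrypoint would detect the existing `[admins]` section and leave the hash alone.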
cc @kocolosk
Hmm, the source of truth for `default.d` is the ConfigMap, while `local.d` has no underlying source of truth. Are you thinking of using a PV to make `local.d` persistent? Would you just reuse a subpath of the existing one for the DB files?
It's not ideal, but I think reusing a subpath of the existing DB-file PV would be the simplest option, yes.

The inconsistent cookie auth between nodes is only solved if the `_cluster_setup` step synchronises the admin hashes in a persistent fashion; it's unclear to me whether that's the case.
https://github.com/apache/couchdb-helm/pull/26 provides a workaround by allowing users to specify a hash at deploy time
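Pinning a precomputed hash at deploy time sidesteps the regeneration entirely, since every node and every restart then sees the same `[admins]` value. A sketch of producing a CouchDB-style PBKDF2 hash (the `-pbkdf2-<key-hex>,<salt-hex>,<iterations>` format, SHA-1 PRF, 20-byte key, and 10-iteration default match CouchDB's historical `couch_passwords` behaviour, but verify against your CouchDB version before relying on it):

```python
import hashlib
import secrets

def couchdb_admin_hash(password: str, iterations: int = 10) -> str:
    """Build a CouchDB-compatible '-pbkdf2-' admin password hash.

    CouchDB stores the salt as a hex string and feeds that ASCII hex
    string (not the decoded bytes) into PBKDF2-HMAC-SHA1 with a
    20-byte derived key.
    """
    salt = secrets.token_hex(16)
    dk = hashlib.pbkdf2_hmac(
        "sha1", password.encode(), salt.encode(), iterations, dklen=20
    )
    return f"-pbkdf2-{dk.hex()},{salt},{iterations}"

print(couchdb_admin_hash("s3cret"))
```

The resulting string can then be supplied to the chart instead of a plaintext password, so restarts no longer mint a new hash.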