Results: 27 comments of Nicola Urbinati

I'll point to my comment here, since it seems closely related: [Issue #952](https://github.com/kubereboot/kured/issues/952#issuecomment-2424694504). Any solution?

Update: connecting to each single node's Fauxton GUI and doing a manual check (connect to node 1, create a db, connect to node 2, check the db's presence), the replication seems to...
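For reference, the same manual check can also be done against the CouchDB HTTP API instead of Fauxton. A minimal sketch, assuming a port-forward to two of the pods; the pod names, namespace and credentials are placeholders, not taken from my exact setup:

```bash
# Forward two of the CouchDB pods to different local ports (names are assumptions).
kubectl -n couchdb port-forward pod/couchdb-couchdb-0 5984:5984 &
kubectl -n couchdb port-forward pod/couchdb-couchdb-1 5985:5984 &

# Create a database through node 0 (credentials are placeholders).
curl -s -u admin:password -X PUT http://localhost:5984/replication-test

# Check whether node 1 sees it: db info back means the nodes share the db, a 404 means they don't.
curl -s -u admin:password http://localhost:5985/replication-test
```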

Hi, it's behind a proxy (Traefik), on the CouchDB service port 5984. I have the exact same problem if I reach the GUI via `kubectl port-forward -n couchdb service/couchdb-couchdb 5984`.

Hi, sorry for the late answer. So the three CouchDB instances are actually replicating between them, even though the GUI check fails? Thank you.

As a note: cloning the repo and installing the chart from the local copy fixed the problem, so the helm install from the remote repository probably has issues?
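To be concrete about what I mean by "from local", here is a rough sketch, assuming this refers to the Apache CouchDB chart discussed above; the repo URL, chart path and release name are illustrative, not copied from my exact setup:

```bash
# Remote install (the variant that misbehaved for me).
helm repo add couchdb https://apache.github.io/couchdb-helm
helm install couchdb couchdb/couchdb -n couchdb --create-namespace

# Local install (the variant that worked): clone the chart repo and point helm at the chart directory.
git clone https://github.com/apache/couchdb-helm.git
helm install couchdb ./couchdb-helm/couchdb -n couchdb --create-namespace
```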

No, they should be added in the servicemonitor templates (the one you pointed to, for example) in the metadata section, something like this:

Template:
```yaml
metadata:
  name: {{ .Release.Name }}-agent-service...
```

I can confirm that with the very same settings in the values (no selector line), the two PVCs come out different, as the alertmanager one has a `selector: {}` spec not...
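A quick way to see the difference is to dump the two rendered PVCs and compare their `spec.selector` fields. A small sketch, assuming the chart is installed in a `monitoring` namespace; the PVC names are placeholders to copy from the listing:

```bash
# List the PVCs created for the Prometheus and Alertmanager statefulsets.
kubectl -n monitoring get pvc

# Dump both specs and compare spec.selector (the alertmanager one showed an empty selector for me).
kubectl -n monitoring get pvc <prometheus-pvc> -o yaml
kubectl -n monitoring get pvc <alertmanager-pvc> -o yaml
```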

Even more: the volumeClaimTemplates in the sts (prometheus, alertmanager) seem to be the same...

```
volumeClaimTemplates:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    name: prometheus-kube-prometheus-stack-prometheus-db
  spec:
    accessModes:
    - ReadWriteMany...
```

Last thing I can say, other than that this seems to be a chart problem, not actually Longhorn-related, is that I solved it like this (commands sketched below):
- deleted the failing pvc
- rollout...
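Roughly the commands I used, assuming the failing PVC belongs to the Alertmanager statefulset of kube-prometheus-stack in a `monitoring` namespace; the names are placeholders, not my exact resources:

```bash
# Delete the PVC that never binds (take the real name from `kubectl get pvc`).
kubectl -n monitoring delete pvc <failing-alertmanager-pvc>

# Restart the owning statefulset so a fresh PVC is created from its volumeClaimTemplate.
kubectl -n monitoring rollout restart statefulset <alertmanager-statefulset>
```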

Also: after the first installation, PVC deletion and sts restart, if I uninstall the chart and install it again, it works... that's quite strange... I did nothing in the meantime...