feat(cluster): allow using existing secret for backup and restore
Closes #197
@phisco I've made your review slightly more complicated, as this is the base for my work on adding support for replica clusters, and I've also fixed and restructured the credential secrets.
Hi, I merged this branch into main in my fork, as I needed the feature, and I noticed a few issues:
- There are problematic `-}}` at a few places in `_barman_object_store.tpl` in the handling of `destinationPath` and `$secretName`; it doesn't work as is, and moreover it complicates things, like having to do `{{ " destinationPath: \"s3://" }}` instead of just `destinationPath: "s3://` (sketched below).
- In `_backup.tpl` you pass a `secretPrefix` to `barmanObjectStoreConfig`, but in `_barman_object_store.tpl` you expect a `secretSuffix`.
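For context, here is a minimal hypothetical sketch of the whitespace problem; the field paths are illustrative and this is not the chart's actual template:

```yaml
{{/* Hypothetical fragment for illustration only. The trailing "-}}" trims all
     whitespace after the action, including the newline and the next line's
     indentation, so "destinationPath" ends up glued to the preceding output and
     the YAML structure breaks: */}}
{{- with .Values.backups.s3 -}}
      destinationPath: "s3://{{ .bucket }}{{ .path }}"
{{- end }}

{{/* Dropping the right trim ("}}" instead of "-}}") keeps the newline, so the
     indented key renders as ordinary YAML and no longer has to be smuggled into
     a quoted string: */}}
{{- with .Values.backups.s3 }}
      destinationPath: "s3://{{ .bucket }}{{ .path }}"
{{- end }}
```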
@gpothier Have you fixed these in your fork? Can I take a look and/or steal your proposal?
Sure, I just committed it to my fork: https://github.com/gpothier/cnpg-charts/commit/fbdb794f384cfff84f583ce6b1ca62091cb0c2bf
I only fixed the YAML formatting stuff (using `}}` instead of `-}}`, since whitespace is part of YAML syntax).
I don't know what your intent is regarding these prefix/suffix things, so I did not touch that.
I also added `quote` to `endpointURL`, as I thought that was part of the problem; it wasn't, but I think it's better to quote in general anyway, and in particular for URLs, which usually contain `:` characters that are not very safe in unquoted YAML values. It might be a good idea to have a look at the other values that are not quoted too. See https://helm.sh/docs/chart_template_guide/functions_and_pipelines/:
> Let's start with a best practice: When injecting strings from the `.Values` object into the template, we ought to quote these strings.
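As a rough sketch of what that fix looks like, assuming the endpoint lives under `.Values.backups.endpointURL` (the exact path in the chart may differ):

```yaml
{{/* Rendered verbatim, the URL is emitted as a plain (unquoted) YAML scalar: */}}
endpointURL: {{ .Values.backups.endpointURL }}

{{/* Piping through "quote" wraps the rendered value in double quotes, which is
     the practice the Helm docs recommend for strings coming from .Values: */}}
endpointURL: {{ .Values.backups.endpointURL | quote }}
```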
Hi, have you been able to take a look at my suggested fix for this? Do you want me to create another PR with my changes?
I'm super overwhelmed with work right now, so yeah, I would really appreciate any help.
@phisco can you give us a hand here? :D
@Cr4mble Thanks a lot! I would have totally missed that!
@itay-grudev I can confirm that the backup to S3 is now working with the current state of the Helm chart. The WAL files are created in the specified S3 bucket. I used the following values for testing (some are default values):
```yaml
backups:
  enabled: true
  provider: s3
  secret:
    create: false
    name: "cloudnative-pg-s3-creds"
  s3:
    region: "eu-central-1"
    bucket: "testing-cloudnative-pg-cluster-helm-chart"
    path: ""
  scheduledBackups:
    - name: daily-backup
      schedule: "0 0 0 * * *"
      backupOwnerReference: self
      method: barmanObjectStore
```
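For reference, since `secret.create: false` tells the chart to use a pre-existing Secret, here is a rough sketch of what that Secret might look like. The key names follow the usual CloudNativePG convention (`ACCESS_KEY_ID` / `ACCESS_SECRET_KEY`); they are an assumption here and may need to match whatever keys the chart's templates actually reference:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudnative-pg-s3-creds   # must match backups.secret.name above
type: Opaque
stringData:
  ACCESS_KEY_ID: "<aws-access-key-id>"           # placeholder value
  ACCESS_SECRET_KEY: "<aws-secret-access-key>"   # placeholder value
```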
@Cr4mble Thanks a lot.
Thanks a bunch for this guys. Really love the collaboration here :)