postgres-operator
v5.0.5 - expired backups not removed from s3 repo
Looking at pgbackrest info, backups seem to expire correctly, but they are not removed from the S3 repo. I created a single-node cluster, set the full-backup expiration policy to count=2, then ran a few manual backups. pgbackrest info shows just two backup sets (as expected), but nothing has been removed from the S3 repo.
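For reference, a manual backup in PGO v5 is normally triggered by annotating the PostgresCluster so the operator runs the job defined under spec.backups.pgbackrest.manual. A minimal sketch (the cluster name db1-test is assumed here from the repo path):

  # trigger the manual backup defined in spec.backups.pgbackrest.manual
  kubectl annotate postgrescluster db1-test \
    postgres-operator.crunchydata.com/pgbackrest-backup="$(date)" --overwrite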
pgbackrest info (only 2 backups, as expected according to the expiry settings):

stanza: db
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 000000010000000000000006/000000010000000000000008

        full backup: 20220330-095042F
            timestamp start/stop: 2022-03-30 09:50:42 / 2022-03-30 09:51:47
            wal start/stop: 000000010000000000000006 / 000000010000000000000006
            database size: 31.3MB, database backup size: 31.3MB
            repo1: backup set size: 3.9MB, backup size: 3.9MB

        full backup: 20220330-101428F
            timestamp start/stop: 2022-03-30 10:14:28 / 2022-03-30 10:15:21
            wal start/stop: 000000010000000000000008 / 000000010000000000000008
            database size: 31.3MB, database backup size: 31.3MB
            repo1: backup set size: 3.9MB, backup size: 3.9MB
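(For reference, this output can be reproduced from the repository host pod; a sketch, assuming the default PGO pod naming <cluster>-repo-host-0 and the pgbackrest container:

  kubectl exec -it db1-test-repo-host-0 -c pgbackrest -- pgbackrest info --stanza=db
)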
On S3 storage (the 2 expired backups still exist):

s3cmd ls s3://DEV/db-dev/db1-test/repo1/backup/db/
                     DIR  s3://DEV/db-dev/db1-test/repo1/backup/db/20220330-093842F/
                     DIR  s3://DEV/db-dev/db1-test/repo1/backup/db/20220330-094022F/
                     DIR  s3://DEV/db-dev/db1-test/repo1/backup/db/20220330-095042F/
                     DIR  s3://DEV/db-dev/db1-test/repo1/backup/db/20220330-101428F/
                     DIR  s3://DEV/db-dev/db1-test/repo1/backup/db/backup.history/
2022-03-30 09:38       0  s3://DEV/db-dev/db1-test/repo1/backup/db/
2022-03-30 10:15    1590  s3://DEV/db-dev/db1-test/repo1/backup/db/backup.info
2022-03-30 10:15    1590  s3://DEV/db-dev/db1-test/repo1/backup/db/backup.info.copy
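One way to see what retention would remove, without changing anything, is pgBackRest's dry-run expire (available since 2.35, and this image is 2.36). A sketch, run from the repo host pod with the same assumed names as above:

  kubectl exec -it db1-test-repo-host-0 -c pgbackrest -- \
    pgbackrest expire --stanza=db --repo=1 --dry-run --log-level-console=detail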
pgbackrest section of the cluster's manifest:

backups:
  pgbackrest:
    image: host/crunchydata/crunchydata/crunchy-pgbackrest:centos8-2.36-1
    configuration:
    - secret:
        name: pgo-s3-creds
    global:
      repo1-retention-full: "2"
      repo1-retention-full-type: count
      repo1-retention-diff: "2"
      repo1-path: /DEV/db-dev/db1-test/repo1
      repo1-s3-uri-style: host
      repo1-storage-verify-tls: "n"
    manual:
      repoName: repo1
      options:
      - --type=full
    restore:
      enabled: true
      repoName: repo1
      options:
      - --type=time
      - --target="2022-03-25 09:20:14"
    repos:
    - name: repo1
      schedules:
        full: "0 23 * * 6"
        incremental: "0 22 * * 0-5"
      s3:
        bucket: "test"
        endpoint: "host.local"
        region: "None"
    jobs:
      resources:
        limits:
          cpu: "0.1"
          memory: "32Mi"
    repoHost:
      resources:
        limits:
          cpu: "0.1"
          memory: "32Mi"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - dc1
    sidecars:
      pgbackrest:
        resources:
          limits:
            cpu: "0.1"
            memory: "32Mi"
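As a sanity check that these retention options actually reach pgBackRest, the operator-generated configuration can be inspected inside the repo host container; a sketch, assuming the usual /etc/pgbackrest/conf.d location (path assumed, not confirmed from this cluster):

  # dump the operator-rendered pgBackRest config from the repo host
  kubectl exec -it db1-test-repo-host-0 -c pgbackrest -- sh -c 'cat /etc/pgbackrest/conf.d/*'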
Additional info: it looks like the directory contents on S3 are removed, but the catalogue structure remains. Is this correct behaviour?
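One way to confirm that only an empty prefix (and not actual backup data) is left behind is a recursive listing of one of the expired backup directories; s3cmd's DIR entries are synthesized from key prefixes, so a zero-byte placeholder object (like the 0-byte .../backup/db/ entry above) can make a deleted backup still appear as a directory:

  s3cmd ls --recursive s3://DEV/db-dev/db1-test/repo1/backup/db/20220330-093842F/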
Hello! Because (a) I cannot replicate this behavior and (b) the pgBackRest code looks like it should remove the storage path, not just its contents, this seems like unexpected behavior; but since I can't replicate it, I wonder whether it might be something in your environment.
(Also, just to be safe, you probably want to switch spec.backups.pgbackrest.restore.enabled to false.)
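A minimal sketch of that change with kubectl patch (cluster name assumed, as before):

  kubectl patch postgrescluster db1-test --type merge \
    -p '{"spec":{"backups":{"pgbackrest":{"restore":{"enabled":false}}}}}'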
Are you still experiencing this issue? (And are you still using host.local as the endpoint?)
Closing since we are unable to replicate this issue.