
Erratic behavior with R2 API

Open luizkowalski opened this issue 1 year ago • 0 comments

I have the following configuration:

backups:
  image: eeshugerman/postgres-backup-s3:16
  host: accessories
  env:
    clear:
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: 7
      S3_ENDPOINT: https://xxx.eu.r2.cloudflarestorage.com/sumiu-files
      S3_PREFIX: backup
      S3_REGION: auto
      S3_BUCKET: pg_backups
      POSTGRES_HOST: 10.0.0.3
      POSTGRES_DATABASE: sumiu_production
      POSTGRES_USER: postgres
    secret:
      - POSTGRES_PASSWORD
      - S3_ACCESS_KEY_ID
      - S3_SECRET_ACCESS_KEY

This is deployed with Kamal, by the way.

When I run the backup command like this: kamal accessory exec backups "sh backup.sh"

I get this error:

docker stdout: Creating backup of sumiu_production database...
Uploading backup to pg_backups...
upload: ./db.dump to s3://pg_backups/backup/sumiu_production_2024-01-29T17:09:38.dump
Backup complete.
Removing old backups from pg_backups...
docker stderr: An error occurred (NoSuchKey) when calling the ListObjects operation: The specified key does not exist.

I was using S3 but I'm trying to switch to Cloudflare's R2. My first suspicion was that the tool had some kind of persistence and was trying to delete a "known" backup that exists on S3 but not on R2. Checking the script, that doesn't seem to be the case; it looks more like an inconsistency at the API level, where S3 returns an empty result and R2 returns an error.

Do you think something can be done on the removal part of the script? If the listing comes back empty, instead of piping it to aws $aws_args, just skip the deletion.
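To illustrate the idea, here is a rough sketch of such a guard. This is not the actual code from backup.sh; the function name and the way keys are passed in are hypothetical, and the real deletion call (aws $aws_args s3 rm ...) is only shown as a comment:

```shell
#!/bin/sh
# Hypothetical sketch: only attempt deletion when the listing step
# actually returned keys, so an empty prefix on R2 doesn't error out.
delete_old_backups() {
  keys="$1"  # newline-separated object keys from the listing step

  if [ -z "$keys" ]; then
    # Nothing matched the age cutoff (or the prefix is empty on R2):
    # skip the delete entirely instead of piping nothing into aws.
    echo "nothing to delete"
    return 0
  fi

  printf '%s\n' "$keys" | while read -r key; do
    # In the real script this would be something like:
    #   aws $aws_args s3 rm "s3://${S3_BUCKET}/${S3_PREFIX}/${key}"
    echo "deleting $key"
  done
}
```

With a guard like this, the S3 and R2 behaviors converge: an empty listing is treated as "no work to do" rather than being forwarded to a call that R2 rejects.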

It is worth noting that even though the command failed, the backup is there on R2, so I guess this will stop failing in 7 days once there are old backups to remove.

By the way, thanks for this gem of a project, really nice!

luizkowalski · Jan 29 '24 17:01