Backups to Cloudflare R2 (S3-compatible) storage fail
I'm backing up to S3-compatible storage (Cloudflare R2) from CloudNativePG (CNPG), which uses barman-cloud-backup under the hood.
Backups are failing with the following error:

```
ERROR: Upload error: An error occurred (InvalidPart) when calling the CompleteMultipartUpload operation: All non-trailing parts must have the same length. (worker 1)
```
The above error is surfaced by barman-cloud-backup when it attempts to complete the weekly full backup.
```
$ barman-cloud-backup --version
barman-cloud-backup 3.10.1
```
I've come across what seems like a similar issue when using Cloudflare R2 as the backend for a Docker registry, which was resolved by setting a fixed chunk size for multipart uploads. However, I don't know enough about the barman codebase to tell whether this is the same issue.
https://community.cloudflare.com/t/all-non-trailing-parts-must-have-the-same-length/552190/7
There's also this discussion:
https://github.com/distribution/distribution/pull/3940#issuecomment-1638356120
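To illustrate what I mean by a fixed chunk size, here's a minimal boto3 sketch run directly against R2 (barman-cloud-backup uses boto3 under the hood, as far as I know). The endpoint, bucket, and file names are placeholders, and I'm assuming barman's uploader behaves like a plain boto3 transfer, which I haven't verified against its code:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# R2 rejects CompleteMultipartUpload unless all non-trailing parts have the
# same length, so pin the part size rather than letting it vary.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # switch to multipart above 8 MiB
    multipart_chunksize=8 * 1024 * 1024,  # every non-trailing part is exactly 8 MiB
)

# Placeholder endpoint; credentials come from the usual AWS_* environment variables.
s3 = boto3.client("s3", endpoint_url="https://<account-id>.r2.cloudflarestorage.com")

# Uploading from a file on disk produces fixed-size parts, which R2 accepts.
s3.upload_file("base.tar", "my-bucket", "base.tar", Config=config)
```

Whether barman can be told to pin its part size the same way is what I can't tell from the docs.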
Just an update here: I had snappy compression enabled for my base backups, but once I changed the settings to use no compression the backups have succeeded so far.
Other backups of very small databases (i.e. < 100 MB) seem OK, but this one is about 600 MB uncompressed; presumably the small ones never get big enough to trigger a multipart upload at all.
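I don't know how barman cuts its parts internally, but as a toy illustration of why compression could cause this: if an uploader compresses fixed-size input windows and ships each result as a part, the parts won't all come out the same length:

```python
import gzip
import os

# Toy illustration (not barman's actual code): compress equal-size input
# windows whose contents differ in compressibility, as real table data does,
# and record the size of each compressed "part".
part_sizes = []
for i in range(5):
    # 4 MiB windows: a growing run of zeros plus incompressible padding
    zeros = b"\x00" * (i * 500_000)
    window = zeros + os.urandom(4 * 1024 * 1024 - len(zeros))
    part_sizes.append(len(gzip.compress(window)))

# The sizes differ, so non-trailing parts would have unequal lengths,
# which is exactly what R2's InvalidPart error rejects.
print(part_sizes)
```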
Same problem here on R2.
Could this be related to this issue? https://github.com/EnterpriseDB/barman/issues/957
We support S3-compatible object stores, but even when providers outside AWS claim 100% compatibility, it isn't always complete, as the Linode case shows.
It would be good to check whether this involves anything Cloudflare doesn't yet support.
@andrewheberle @maxpain if you are using CNPG, were you able to solve this with the suggestion in https://github.com/EnterpriseDB/barman/issues/957#issuecomment-2290933264?
Did you get that working in the end?
I'm seeing the following error:

```
{"level":"info","ts":"2025-03-09T08:42:29.595987188Z","logger":"barman-cloud-check-wal-archive","msg":"2025-03-09 08:42:29,595 [55] ERROR: Barman cloud WAL archive check exception: An error occurred (400) when calling the HeadBucket operation: Bad Request","pipe":"stderr","logging_pod":"tesla-mate-1"}
```
Config is:

```yaml
barmanObjectStore:
  data:
    compression: gzip
    jobs: 2
  wal:
    compression: gzip
    maxParallel: 1
  destinationPath: s3://homelab
  endpointURL: https://x.r2.cloudflarestorage.com
  s3Credentials:
    accessKeyId:
      name: cloudnative-pg-s3-credentials
      key: AWS_ACCESS_KEY_ID
    secretAccessKey:
      name: cloudnative-pg-s3-credentials
      key: AWS_SECRET_ACCESS_KEY
    region:
      name: cloudnative-pg-s3-credentials
      key: REGION
retentionPolicy: "10d"
```
I disabled compression for my full backups which, without looking at the code, I assume means all the parts in the multipart upload (except the last) end up the same size…
I had a look at the documentation Cloudflare publish about their S3 API compatibility and couldn't see anything saying multipart uploads aren't properly implemented. I did see that they don't support the headers related to server-side encryption, but I've not tested this.
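For reference, the change on the CNPG side was roughly this (a sketch of my config trimmed to the relevant keys; as far as I can tell, leaving out `compression` under `data` is what makes barman-cloud-backup upload the base backup uncompressed):

```yaml
barmanObjectStore:
  data:
    # no "compression" key here: base backups are uploaded uncompressed
    jobs: 2
  wal:
    compression: gzip  # WAL archiving has not hit this error for me
    maxParallel: 1
```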
Got it working. A wrong secret was being reported as a Bad Request 😩
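In case it helps anyone else debugging the HeadBucket 400: you can replay the same check outside the cluster with the AWS CLI, using the credentials from the secret (bucket and endpoint below are the placeholders from my config above, and Cloudflare's docs say to use `auto` as the region):

```
$ AWS_ACCESS_KEY_ID=<key-id> AWS_SECRET_ACCESS_KEY=<secret> \
  aws s3api head-bucket \
    --bucket homelab \
    --endpoint-url https://x.r2.cloudflarestorage.com \
    --region auto
```

With the wrong secret this fails the same way; with the right one it exits silently.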