[FEAT] Backup shipping to S3-compatible storage
Is your feature request related to a problem? Please describe
Backups are only good if they're stored somewhere else, so Kanidm should support native shipping of backup files to S3 once they're created. (And potentially restores?)
Describe the solution you'd like
Add configuration options to the backup stanzas which support S3-compatible storage, and an implementation which does the job.
Rotation of backups can be a feature, but might be put off.
This includes access key / secret / region / endpoint so non-AWS services can be used.
This should be tested using testcontainers for minio or a similar s3-compatible storage backend (blocked behind a testcontainers feature flag).
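A hypothetical shape for such a stanza, extending Kanidm's existing `[online_backup]` configuration in server.toml. The `s3_*` keys below are invented for illustration only, not an implemented interface:

```toml
# Existing online backup stanza in server.toml
[online_backup]
path = "/var/lib/kanidm/backups/"
schedule = "00 22 * * *"
versions = 7

# Hypothetical additions for S3-compatible shipping -- key names are
# illustrative, not part of any current Kanidm release.
s3_endpoint = "https://minio.example.com"  # allows non-AWS services
s3_region = "us-east-1"
s3_bucket = "kanidm-backups"
s3_access_key = "example-access-key"
s3_secret_key = "example-secret-key"
```

Keeping the S3 settings inside the backup stanza would let the shipping step run immediately after each scheduled backup completes.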
Describe alternatives you've considered
Piles of bacon.
Additional context
I quite like the idea.
Maybe you want to have a look at https://opendal.apache.org/ to provide access to other storage backends. But maybe this is too much for this use case. Just an idea.
I'll have a look at OpenDAL for implementing it, but I don't want to have the project support every possible storage backend - local (or network) disk and s3-compatible storage covers MANY possible deployment methods.
@cuberoot74088 We have had issues when packaging sccache with opendal in other projects, so I want to avoid opendal here.
Maybe the server-side encryption with customer-provided keys (SSE-C) will be the easiest way?
Using server-side encryption with customer-provided keys (SSE-C)
that only works if you're using AWS 😄
Why didn't anyone tell me?
Quick look at docs of MinIO, Garage and Ceph indicates SSE-C support.
Running this command on my Linux machine confirmed it:
minio-client cp myserver/bucket/file.txt file.txt --enc-c myserver/bucket/=$KEY
....myserver/bucket/file.txt: 26 B / 26 B ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 3 B/s 6
Huh, today I learned. Thanks!
The rustic_core library is likely overkill and, as far as I understand, relies on the previously mentioned opendal library. Nevertheless, it seems to provide all the essential features - encryption, compression, deduplication, verification, and rotation - so it may still be worth considering.
We are trying to be careful about pulling in more libraries. We already have so many in the project, so we are conservative about when we bring in more, and about what they themselves depend on.
I'm using https://github.com/benbjohnson/litestream to back up kanidm's SQLite database to S3-compatible storage, though I guess it's not the most proper way, since kanidm has its own database and cache?
In theory that's fine. We do have a cache, but after each transaction we flush/write it to sqlite underneath, so provided that "sqlite implements its concurrency promises correctly", it should be okay.
However, it's probably not the best option - you may be better to use our cron-backup system and write those out instead.
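For reference, the cron-backup system mentioned above is driven by the `[online_backup]` stanza in the server config; roughly like this (the path, schedule, and retention count are examples):

```toml
[online_backup]
# Directory where backup files are written.
path = "/var/lib/kanidm/backups/"
# cron-style schedule: 22:00 every day.
schedule = "00 22 * * *"
# Number of backups to retain before older ones are rotated out.
versions = 7
```

The files written to that path are what an external tool (or a future S3 shipping feature) would pick up and copy offsite.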
Ah yes, I forgot to mention that I do have file-level backups to offsite locations as well, since that's set up for the whole server and requires no kanidm-specific configuration.
Then that should be fine. :)