export and import
Description of problem
Most dokku plugins offer :export and :import commands that can be used to back up and restore to a new server if required.
This plugin does not offer any similar options; adding them would be a really nice improvement.
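For reference, this is the shape other official dokku service plugins expose (using the postgres plugin as an example); the clickhouse equivalents below are hypothetical and are essentially what this issue is asking for:

```sh
# Existing pattern in other dokku service plugins (e.g. dokku-postgres):
dokku postgres:export my-db > my-db.dump
dokku postgres:import my-db < my-db.dump

# Hypothetical equivalents for this plugin (not implemented yet):
dokku clickhouse:export {{NAME}} > clickhouse.dump
dokku clickhouse:import {{NAME}} < clickhouse.dump
```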
It seems we have access to data-dir, but it's probably unsafe to rsync its data while other processes are writing to it.
Does ClickHouse have a way to do import/export? That's what is blocking me from implementing this.
It's a good question. I read https://clickhouse.tech/docs/en/operations/backup/
The best, officially supported option seems to be https://clickhouse.tech/docs/en/operations/utilities/clickhouse-copier/
It's not perfect (not atomic) but should be a good start :)
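For context, this is roughly how clickhouse-copier is invoked per that doc page; it needs a running ZooKeeper plus a copy task description uploaded to it, so it isn't a single self-contained command (paths and node names below are placeholders):

```sh
# Sketch based on the clickhouse-copier docs: requires a ZooKeeper connection
# config (zookeeper.xml) and a task description already uploaded to ZooKeeper.
clickhouse-copier --daemon \
  --config zookeeper.xml \
  --task-path /clickhouse/copier/task1 \
  --base-dir /var/lib/clickhouse-copier
```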
We don't customize the image. Is that process available in the official clickhouse image?
The clickhouse-copier binary is available in the official image; however, I ended up using this repo: https://github.com/AlexAkulov/clickhouse-backup, which is mentioned in the official docs: https://clickhouse.tech/docs/en/operations/backup/
It was fairly simple:
dokku clickhouse:enter {{NAME}}
wget https://github.com/AlexAkulov/clickhouse-backup/releases/download/v0.6.4/clickhouse-backup.tar.gz
tar -zxvf clickhouse-backup.tar.gz
cd clickhouse-backup
CLICKHOUSE_DATA_PATH=/var/lib/clickhouse CLICKHOUSE_USERNAME={{username}} CLICKHOUSE_PASSWORD={{password}} ./clickhouse-backup create test
After that, it's simply a matter of zipping up /var/lib/clickhouse/backup (or /var/lib/dokku/services/clickhouse/{{NAME}}/data/backup on the main host).
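For example, archiving the "test" backup from the host could look like this (path as above, backup name from the create step):

```sh
# On the Dokku host: archive the "test" backup created inside the container.
tar -czf clickhouse-test-backup.tar.gz \
  -C /var/lib/dokku/services/clickhouse/{{NAME}}/data/backup test
```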
I dunno if this can be integrated into this repo or not. I agree it is less straightforward than a simple pg_dump.
If the binary is indeed available, then it should be possible to do backups in the same way we do them for everything else.
Is there a way to import the backup?
I haven't looked too much into clickhouse-copier, as they mentioned that it is mostly used to replicate data from one db to another, not to generate a file. I guess we could do something like "create another db, copy data there, disconnect, zip the mounted files from the new db, drop the new db", and the opposite for export? But it seems more complex than strictly necessary, which is why I used clickhouse-backup (not copier) instead.
Could we use that docker image from the linked repo against the service? And what does the restore process look like?
> Could we use that docker image from the linked repo against the service?
Yes, but my Docker knowledge is very limited, and I wasn't sure how to resolve the hostname externally (without exposing the service through an ambassador).
Restoring should simply be ./clickhouse-backup restore test
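For completeness, a sketch of the full restore flow inside the container, mirroring the create step above (same env vars and the same extracted binary assumed):

```sh
dokku clickhouse:enter {{NAME}}
cd clickhouse-backup
# List local backups, then restore schema and data from the "test" backup.
CLICKHOUSE_DATA_PATH=/var/lib/clickhouse CLICKHOUSE_USERNAME={{username}} CLICKHOUSE_PASSWORD={{password}} ./clickhouse-backup list
CLICKHOUSE_DATA_PATH=/var/lib/clickhouse CLICKHOUSE_USERNAME={{username}} CLICKHOUSE_PASSWORD={{password}} ./clickhouse-backup restore test
```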
You could start the container as a linked container?
Yes, that would probably work and be the most efficient way of doing the backup
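Untested sketch of what that linked-container approach could look like, assuming the service container follows dokku's usual dokku.clickhouse.<NAME> naming, the data path mentioned earlier in the thread, and the altinity/clickhouse-backup image from the repo above:

```sh
# Run clickhouse-backup from its own container, linked to the service container.
# Container name, mount path, env vars and image tag are assumptions, not verified here.
docker run --rm \
  --link dokku.clickhouse.{{NAME}}:clickhouse \
  -v /var/lib/dokku/services/clickhouse/{{NAME}}/data:/var/lib/clickhouse \
  -e CLICKHOUSE_HOST=clickhouse \
  -e CLICKHOUSE_USERNAME={{username}} \
  -e CLICKHOUSE_PASSWORD={{password}} \
  altinity/clickhouse-backup:latest create linked-backup
```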
The 2021 roadmap for ClickHouse contains a "backup" feature. The issue which tracks this feature is https://github.com/ClickHouse/ClickHouse/issues/13953
Once that story is done there will be official backup and restore commands which could be used for "export" and "import".
Okay, it seems like backup support was implemented. Does anyone have good ideas as to how this might be done on our end? It seems like it has to back up to a directory, but we don't currently mount any backup directories afaik...
My assumption is the same. A folder has to be mounted to expose backups. I will run some tests tomorrow.
We could back up to a file and then maybe cat the contents out?
The type of a disk under storage_configuration can be s3. I have to do some testing to see if backup and restore work with that type.
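For anyone testing this, a rough sketch of such a disk definition dropped into config.d (endpoint, credentials and disk name are placeholders; whether BACKUP/RESTORE work against it is exactly what needs verifying):

```sh
# Hypothetical config snippet, written into the server's config.d directory.
cat > /etc/clickhouse-server/config.d/s3_backup_disk.xml <<'EOF'
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_backups>
        <type>s3</type>
        <endpoint>https://my-bucket.s3.amazonaws.com/clickhouse-backups/</endpoint>
        <access_key_id>REPLACE_ME</access_key_id>
        <secret_access_key>REPLACE_ME</secret_access_key>
      </s3_backups>
    </disks>
  </storage_configuration>
  <backups>
    <allowed_disk>s3_backups</allowed_disk>
  </backups>
</clickhouse>
EOF
```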
That might make our existing backup/restore code more difficult to integrate with. I'd rather not deviate from that if possible, as it makes it easier to copy/paste code across plugins.
I had a closer look at https://github.com/AlexAkulov/clickhouse-backup. The functionality which uses BACKUP and RESTORE is referred to as "EmbeddedBackup" in the code. During an embedded backup, they build a BACKUP query covering all tables. Executing that query results in a single file, which can be copied or, as you said, output with cat.
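Put together, the native flow could look roughly like this (untested sketch: the database name, user and target path are placeholders, and the path must be permitted under <backups><allowed_path> in the server config):

```sh
# Inside the container: write the whole database into a single backup file,
# then stream it out to the host via dokku clickhouse:enter + cat.
dokku clickhouse:enter {{NAME}} clickhouse-client --user {{username}} --password {{password}} \
  --query "BACKUP DATABASE default TO File('/var/lib/clickhouse/backups/dump.zip')"
dokku clickhouse:enter {{NAME}} cat /var/lib/clickhouse/backups/dump.zip > dump.zip

# Import would be the mirror-image query:
#   RESTORE DATABASE default FROM File('/var/lib/clickhouse/backups/dump.zip')
```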
Backups were implemented in the official image, so does that mean we could implement that here?
Backing up
For anyone who might want to back up ClickHouse to S3, here is how I do it.
COMMANDS=$(cat <<'EOF'
echo -e '✅ Setting environment variables'
export CLICKHOUSE_USERNAME=<replace-clickhouse-username>
export CLICKHOUSE_PASSWORD=<replace-clickhouse-password>
export REMOTE_STORAGE=s3
export S3_ACCESS_KEY=<replace-access-key>
export S3_SECRET_KEY=<replace-secret-key>
export S3_BUCKET=<replace-bucket-name>
export S3_ENDPOINT=<replace-endpoint>
export S3_REGION=us-east-1
echo "✅ Download clickhouse-backup binaries"
wget https://github.com/Altinity/clickhouse-backup/releases/download/v2.6.1/clickhouse-backup-linux-amd64.tar.gz
tar -zxvf clickhouse-backup-linux-amd64.tar.gz
install -o root -g root -m 0755 build/linux/amd64/clickhouse-backup /usr/local/bin
echo "✅ Creating and uploading backup"
BACKUP_NAME=clickhouse-backup-$(date -u +%Y-%m-%dT%H-%M-%S)
clickhouse-backup create $BACKUP_NAME --rbac --configs
clickhouse-backup upload $BACKUP_NAME
echo "✅ Exit shell. Goodbye!"
exit
EOF
)
dokku clickhouse:enter <clickhouse-instance-name> bash -c "$COMMANDS"
> [!NOTE]
> clickhouse-backup must be run inside the container.
Restoring
To restore, simply repeat the process but run the following toward the end.
clickhouse-backup restore_remote <replace-with-remote-backup_name>
You can set up a cron job to run this once per day. Learn more about Altinity Backup for ClickHouse.
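For the cron part, assuming the backup snippet above is saved as a script on the Dokku host (path and schedule below are just examples):

```sh
# Hypothetical root crontab entry: run the S3 backup script daily at 03:00.
0 3 * * * /usr/local/bin/clickhouse-s3-backup.sh >> /var/log/clickhouse-backup.log 2>&1
```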