
[Feature]: Backup Manager in the UI

peaklabs-dev opened this issue 8 months ago • 22 comments

General Idea

Currently, only databases are backed up, and only some databases have a restore option. See: https://github.com/coollabsio/coolify/issues/2501. Streamline database backups and offer a one-click restore option without having to manually download and upload the backup. There should also be a second checkbox that lets us enable container backups -> a container backup is simply a backup of all the persistent data (volumes) of the container.

Detailed Description

I think it should be pretty straightforward for a pro like @andrasbacsai. I thought about all the points and how I would implement them below; I hope it helps (I cannot implement it myself for now, as I am not experienced enough with Laravel):

  • [ ] 1. Let us back up the full Coolify instance to the cloud and restore it with one click -> all settings... The settings are probably stored in the Coolify database, so a Coolify database backup would be sufficient.
  • [ ] 2. Add other remote storage types, for example SFTP and WebDAV; both are natively supported in Laravel via Flysystem: https://flysystem.thephpleague.com/docs/
  • [ ] 3. Add WebDAV as a remote storage type -> https://flysystem.thephpleague.com/docs/
  • [ ] 4. Set an encryption password and encryption type -> backups sent to remote storage should be fully encrypted, if possible, for security reasons. Encryption is really important, especially for databases. Could be done via AES-256 and a password to decrypt (see the encryption sketch after item 8's snippet).
  • [ ] 5. View all backups in a list -> display all backups, like with database backups.
  • [ ] 6. One-click service or database restore from the backup manager UI -> hit restore and it restores: download the backup from the cloud to the local machine as soon as we hit restore, then restore it -> overwrite the volume and/or the database or the config (see the restore sketch after the demo script).
  • [ ] 7. Delete backups from the list manually -> a delete button that, when hit, deletes the backup locally, on S3, or on SFTP via a remote find command.
  • [ ] 8. Set an expiration time -> for example, after 7 days, the oldest backup is automatically deleted from the remote storage solution. Can be done via find with a line like this:
keep_backups=10  # age threshold in days: `find -mtime +N` matches files older than N days

# Delete old backups with `find`
ssh "${remote_target}" "find ${remote_dir} -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"
  • [ ] 9. Set what to back up -> only the database, only files, or everything -> everything means the full container: the database and all configurations set in Coolify. Database backups already work. Files would essentially be every volume of the Docker container, since all other Docker data is not persistent. Both would just let us schedule a cron job that does both. As all services run in containers (even in the future when k8s is supported), there is essentially just the need to back up all volumes in the folder below (but it is better to let us select backups at a container/service level -> e.g. just back up the WordPress volumes; see the per-container sketch after this item):
/var/lib/docker/volumes

-> At the Coolify level, we can select Backup All --> and with Backup All, it basically just backs up all the volumes in the Docker volumes directory.
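A minimal sketch of the per-container variant from item 9, assuming the standard volume layout under /var/lib/docker/volumes; the container name, backup directory, and file naming are assumptions:

# Hypothetical: back up only the named volumes of one container
container="wordpress"                  # assumed service/container name
backup_dir="/opt/docker_backups"
timestamp=$(date +"%Y-%m-%d_%H-%M-%S")

# List the named volumes mounted by this container
volumes=$(docker inspect -f '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}} {{end}}{{end}}' "$container")

for vol in $volumes; do
  # Archive the volume contents straight from the Docker volumes directory
  tar -czf "${backup_dir}/${timestamp}-${container}-${vol}-backup.tar.gz" \
      -C "/var/lib/docker/volumes/${vol}/_data" .
done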

  • [ ] 10. One-click manual backup -> let us click a button which triggers a full backup
  • [ ] 11. Only store backups remotely -> the backups are only stored on the remote, not also on the Coolify instance storage (secure storage) -> after the backup is uploaded to the remote storage (SFTP or S3), delete all local copies of the backup with rm
  • [ ] 12. Button to download a backup to localhost or my current device (for a local copy) -> a simple download button to download the backup to my PC. Restoring such a backup essentially just replaces the Docker volumes: it overwrites all files with the files from the backup
  • [ ] 13. Lock backups -> locked backups will not be deleted from remote storage automatically and can only be deleted manually. No idea yet how to implement this, but maybe move them to a different folder on the remote storage, called locked, which is not cleaned every time a new backup is sent to remote (see the sketches after this list)
  • [ ] 14. Make sure to compress backups with gzip
  • [ ] 15. Also let us set a folder and a subfolder in the cloud bucket where the backups should be stored
  • [ ] 16. Scheduled backups -> let us choose hourly, every 12 hours, every 24 hours... -> just use cron to schedule a backup task, like for databases but for volume backups (see the cron sketch after this list)
  • [ ] 17. Clone a resource and let us select a backup of a volume, so we can roll back a resource without rolling back an actively running instance. Let us clone the resource and, while cloning, select a backup for the volumes that will be used, so we can use an old state of the persistent data to check, for example, whether a bug is still present there (see the clone sketch after this list).
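Minimal sketches for items 13, 16, and 17, reusing the variables from the demo script below; folder names, schedules, and file names are assumptions, not a final design.

Item 13 -> "lock" a backup by moving it into a locked/ subfolder and pruning only the top level:

# Hypothetical: move one backup out of the cleanup's reach
locked_file="2024-06-01_03-00-00-backup.tar.gz"   # example file name
ssh "${remote_target}" "mkdir -p ${remote_dir}/locked && mv ${remote_dir}/${locked_file} ${remote_dir}/locked/"
# -maxdepth 1 keeps the cleanup out of locked/
ssh "${remote_target}" "find ${remote_dir} -maxdepth 1 -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"

Item 16 -> schedule the backup script with cron, e.g. via an /etc/cron.d entry (the script path is an assumption):

# Hypothetical: run the backup script every 12 hours
0 */12 * * * root /usr/local/bin/docker-volume-backup.sh >> /var/log/docker-volume-backup.log 2>&1

Item 17 -> restore a volume backup into a fresh, separately named volume so a clone can mount the old state without touching the running instance:

# Hypothetical: unpack a per-volume backup (naming from the item 9 sketch) into a new volume
docker volume create wordpress_data_clone
docker run --rm -v wordpress_data_clone:/restore -v /opt/docker_backups:/backups:ro \
    alpine tar -xzf /backups/2024-06-01_03-00-00-wordpress-wordpress_data-backup.tar.gz -C /restore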

Manual script as a demo (source: https://schroederdennis.de/docker/docker-volume-backup-script-sichern-mit-secure-copy-scp-nas/)

#!/bin/bash
# # # # # # # # # # # # # # # # # # # # # # # #
#                Configuration                #
# # # # # # # # # # # # # # # # # # # # # # # #

# Directory to be backed up
source_dir="/var/lib/docker/volumes"
# Directory in which the backups are to be saved
backup_dir="/opt/docker_backups"
# Age threshold in days for deleting old backups (used with find -mtime)
keep_backups=10
# Current date and time
current_datetime=$(date +"%Y-%m-%d_%H-%M-%S")
# Name for the backup archive
backup_filename="${current_datetime}-backup.tar"
# Target server information -> SFTP location
remote_user="root"
remote_server="192.168.40.50"
remote_dir="/opt/docker_backups"
# # # # # # # # # # # # # # # # # # # # # # # #
#           End of configuration            #
# # # # # # # # # # # # # # # # # # # # # # # #

remote_target="${remote_user}@${remote_server}"
backup_fullpath="${backup_dir}/${backup_filename}"
 
# Shut down the running Docker containers -> recommended for consistency, but probably not desirable, as shutting down production containers is bad
running_containers=$(docker ps -q)
docker stop $running_containers
# Create the backup archive
tar -cpf "${backup_fullpath}" "${source_dir}"
# Restart only the containers that were running before (docker ps -a -q would also start containers that were already stopped)
docker start $running_containers
# Compress the backup archive
gzip "${backup_fullpath}"
backup_fullpath="${backup_fullpath}.gz"
# Copy the backup to the target server with SCP without password
scp "${backup_fullpath}" "${remote_target}:$remote_dir/"
# Delete older local backups with `find`.
find "$backup_dir" -type f -name "*-backup.tar.gz" -mtime +$keep_backups -exec rm {} \;
# Delete older remote backups with `find`.
ssh "${remote_target}" "find ${remote_dir} -type f -name '*-backup.tar.gz' -mtime +$keep_backups -exec rm {} \;"
 
echo "Backup has been created: ${backup_fullpath} and on ${remote_target} copied."

peaklabs-dev • Jun 08 '24 15:06