Temporary files and other unexpected issues.
I did not expect that during the first pass all chunks are created as temp files. The app does not use the temporary directory defined in config/config.php but the one defined in php.ini under 'sys_temp_dir'. My normal /tmp dir is tmpfs and limited in space. The result was an infinite loop of backup attempts until I debugged the problem.
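In case it helps others who hit the same wall, here is a minimal sketch of how I would point PHP's temp directory at a filesystem with enough space; the path is a placeholder, and the php.ini location depends on your distribution and SAPI:

    # Check which temp dir PHP (and therefore the backup app) will actually use.
    php -i | grep sys_temp_dir

    # In php.ini (for both the CLI and the FPM/Apache SAPI), point sys_temp_dir
    # at a filesystem with enough room for the chunk files, e.g.:
    #   sys_temp_dir = /mnt/backup-scratch/php-tmp
    mkdir -p /mnt/backup-scratch/php-tmp
    chown www-data:www-data /mnt/backup-scratch/php-tmp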
Then I tried a full backup with the App-Data directory on an SFTP share on another machine in the same rack, since I have more than 1 TB of data to back up. I expected that using an external drive would save storage space on the server, but then I noticed that the temporary files created in the temp dir are not deleted after being written to the SFTP-mounted external drive. That looks like a bug.
In the next attempt, I configured the App-Data folder on the same machine (as an external drive on the local machine). Now it deletes every temporary chunk after it is written to the App-Data folder, as expected.
Btw: I observed that using SFTP-mounted external drives with authentication options other than "username and password" (e.g. with "Global credentials") does not work at all with the backup app, although one can mount those drives and write to them. But that looks like another issue.
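For reference, this is roughly how the working mount was set up via occ. The mount point, host, path, and credentials below are placeholders, and password::password is the plain username/password auth backend (the one combination that worked for me); treat the exact -c keys as an assumption to verify against your Nextcloud version:

    ./occ files_external:list   # inspect existing external mounts

    # Create the SFTP mount used as the App-Data target (placeholder values).
    ./occ files_external:create /backup-appdata sftp password::password \
        -c host=backup-host.example.com -c root=/srv/ncbackup \
        -c user=backupuser -c password=secret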
As you can see, I needed more than three attempts to get it to work (and I think I am not the only one). Thus I tried to get the backup app back to its original state with ./occ backup:reset. Unfortunately, that does not reset the app completely to its original state as I expected (like a make clean). The enqueued jobs seem to still be present, since the "Restoring points history" still shows future restoring points.
This backup app is way slower than my previous scripted backup procedure, but those backups did not carry all the metadata: they were one big tarball with the complete data inside (besides server/app files and a database dump), and a restore was only possible with the complete data exactly in the state it was in at creation time, not for single files. Therefore my first pass only needed about 6-8 hours of maintenance mode for 1 TB. With this app I need up to 15 hours on the same hardware. But at the end of the day, since partial backups use far fewer resources, this app is my choice for the future.
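For comparison, my old procedure was essentially the following sketch (paths and database names are placeholders); everything between the two maintenance:mode calls is what caused the 6-8 hours of downtime:

    ./occ maintenance:mode --on

    # One monolithic snapshot: database dump plus a tarball of the data directory.
    mysqldump --single-transaction nextcloud > /backup/nextcloud-db.sql
    tar -czf /backup/nextcloud-data.tar.gz -C /var/www/nextcloud data

    ./occ maintenance:mode --off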
All in all, good work!
(Edit: Typo)
(Too many bug reports to answer in detail right now, but please try 1.0.4; it might fix your issue with the external appdata.)
I will come back to you with a detailed answer to each of your issues in the next few days!
Then I tried a full backup with the App-Data directory on an SFTP share on another machine in the same rack, since I have more than 1 TB of data to back up. I expected that using an external drive would save storage space on the server, but then I noticed that the temporary files created in the temp dir are not deleted after being written to the SFTP-mounted external drive. That looks like a bug. In the next attempt, I configured the App-Data folder on the same machine (as an external drive on the local machine). Now it deletes every temporary chunk after it is written to the App-Data folder, as expected.
Can you tell me if the issue with the undeleted temp files is still present in 1.0.4?
As you can see, I needed more than three attempts to get it to work (and I think I am not the only one). Thus I tried to get the backup app back to its original state with ./occ backup:reset. Unfortunately, that does not reset the app completely to its original state as I expected (like a make clean). The enqueued jobs seem to still be present, since the "Restoring points history" still shows future restoring points.
./occ backup:reset --uninstall should do the trick
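A minimal sketch of the full reset flow; the app:enable step afterwards is my assumption for getting a clean start, not part of the maintainer's reply:

    # Full reset: also removes the app's data and configuration.
    ./occ backup:reset --uninstall

    # Re-enable the app afterwards if you want to start over (assumption).
    ./occ app:enable backup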
This backup app is way slower than my previous scripted backup procedure, but those backups did not carry all the metadata: they were one big tarball with the complete data inside (besides server/app files and a database dump), and a restore was only possible with the complete data exactly in the state it was in at creation time, not for single files. Therefore my first pass only needed about 6-8 hours of maintenance mode for 1 TB. With this app I need up to 15 hours on the same hardware.
1.1 will improve some of the processes; I am not sure if performance during the first pass will be affected. But yes, there is no way to compete against a program written in C :)
I am very sorry for my late answer. Despite all the initial enthusiasm for your backup solution, it is practically unusable for me due to the long downtime. Hence I built my own backup solution with a second machine and unison. To create a consistent copy on that second machine, it needs less than a minute of maintenance-mode downtime. The backup can then be expensively compressed and stored on that second machine while my main instance is online and running.
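Roughly, the new procedure looks like the sketch below (host and paths are placeholders); only the unison sync between the maintenance:mode calls counts as downtime, and the compression runs afterwards on the second machine:

    ./occ maintenance:mode --on

    # Mirror the data directory to the second machine; unison only transfers
    # changed files, which keeps the downtime under a minute. A database dump
    # would accompany this in the same window (assumption).
    unison /var/www/nextcloud/data ssh://backup-host//srv/nc-mirror -batch

    ./occ maintenance:mode --off

    # Later, on backup-host: compress the consistent copy at leisure.
    # tar -czf /srv/archives/nc-$(date +%F).tar.gz -C /srv nc-mirror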