Print out the backup size when listing snapshots (enhancement)
Output of restic version
Any.
Expected behavior
Adding an extra column listing the size of the backup (in bytes) would be very useful: it makes it possible to distinguish different backups just by checking their size.
$ restic snapshots
ID        Date                 Host       Tags  Directory  Size
----------------------------------------------------------------
5b969a0e  2016-12-09 15:10:32  localhost        myfile     390865
Actual behavior
$ restic snapshots
ID        Date                 Host       Tags  Directory
----------------------------------------------------------
5b969a0e  2016-12-09 15:10:32  localhost        myfile
Thanks for the suggestion. What would you expect the size to be? Since all data is deduplicated, a "size" for a particular snapshot is not that easy to determine. Would that be the size of all data referenced in that snapshot? Or the data that was not yet stored in the repo when the snapshot was taken (new data)?
This is a very good proposal. The number on the right should be the cumulative size of blobs added to the repo. It is the most interesting quantitative parameter of any backup run.
How much space did my incremental backup use last night? Oops, it's 10x more than the night before; I left some junk somewhere (or forgot to add some excludes), I'd better clean it up. ;)
+1 for @zcalusic suggestion
The problem with the size of "new" blobs (those added by that particular snapshot) is that it becomes less relevant over time, because those blobs will be referenced by later snapshots. In addition, when earlier snapshots are removed, the number of blobs referenced only by a particular snapshot will grow.
I think it's valuable to print this information right after the backup is complete, and we can also record it in the snapshot data structure in the repo. I've planned to add some kind of 'detail' view for a particular snapshot, and I think it is a good idea to display the number and size of new blobs there, but in the overview (command snapshots) it's not relevant enough. There, I think restic should display the whole size of a particular snapshot (what you get if you were to restore it), because that doesn't change.
I was instantly reminded of the statistics flag of rdiff-backup (see https://www.systutorials.com/docs/linux/man/1-rdiff-backup-statistics/ ). Sometimes it's nice to see some sort of delta between two snapshots.
Indeed, but that's a different thing: It's computed live and compares two snapshots. We may add something like that, but doing that for the snapshots overview list is too expensive (at least with the information we have available in the data structures right now).
It could be useful to know the size of the data unique to a snapshot vs. the total size (including dedup'd data) of that snapshot.
IMO it would be quite useful to have an idea of how much extra space was used for a new snapshot. This could even be just the physical storage space, computed during backup and stored in the snapshot's metadata. If a snapshot is removed, this metadata should then be invalidated in all later snapshots.
I think I would appreciate such a feature even if nothing else is done in this direction. However, an option to recalculate this "extra size" after some previous backups were removed would also be nice. I think this is what BackupLoupe does for Time Machine on macOS. (The deduplication in Time Machine is very basic, but the problem of defining the "size of a snapshot" is the same.)
The most fundamental thing I'd like to know off the bat is how much disk space would the contents of snapshot X consume on the target disk if I restored it.
Preferably I would also be able to get this information for only a subset of the files, e.g. if there were a size command that took the same kind of include/exclude options as the restore command, or if the restore command had an option that made it just report statistics like this instead of actually restoring.
Thanks @rawtaz for pointing me at this issue.
I'm storing backups in metered storage (Backblaze B2). I want to know how much new data I'm creating every time I run a backup. It seems like this ought to be easy to calculate during the backup process; I would be happy if restic would simply log that as part of concluding a backup...but it seems like it might also be useful to store this as an attribute of the snapshot (so it can be queried in the future).
I am not really interested in anything that requires extensive re-scanning of the repository, since that will simply incur additional charges.
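For what it's worth, restic's backup command can already emit machine-readable progress with `--json`, and its final summary message reports how much new data was written. A minimal sketch of pulling that number out of the log, assuming the current JSON summary format (a `message_type` of `"summary"` with a `data_added` field in bytes):

```python
import json

def data_added_from_backup_log(lines):
    """Scan `restic backup --json` output for the final summary
    message and return the bytes of new data uploaded, or None.
    Field names assume restic's current JSON summary format."""
    for line in lines:
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue
        if msg.get("message_type") == "summary":
            return msg.get("data_added")
    return None

# Example with shortened, illustrative log lines:
log = [
    '{"message_type":"status","percent_done":0.5}',
    '{"message_type":"summary","data_added":390865,"snapshot_id":"5b969a0e"}',
]
print(data_added_from_backup_log(log))  # 390865
```

Logging that value after each run would give exactly the "new data per backup" figure without any re-scanning of the repository.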
Any news?
Hello
I would like to second this suggestion. In addition to 'How big would this snapshot be if I restored it?' for any existing snapshot, and 'How much did this snapshot add?' when a snapshot is created, I have a third suggestion:
It would also help to be able to answer the question: 'By how much would my repo size shrink if I removed the following snapshot(s)?' This would be useful with restic forget --prune --dry-run when deciding whether to drop snapshots. For example, I recently dropped 20 of the 40 snapshots in a repo, and it reduced the size from 1.1 GB to 1.0 GB. Had I known this would save only 100 MB, I likely would have kept the older snapshots.
@mholt made #1729 to show some stats. Maybe he can chime in to say something about the progress of this PR.
@dimejo It's done -- just waiting for it to be reviewed/merged. :)
Jumping on a really old issue here, but to me there are two important size fields when thinking of snapshots:
- The snapshot size in storage
- The restore size
e.g.
$ restic snapshots
ID        Date                 Host       Tags  Directory  Snapshot Size  Restore Size
--------------------------------------------------------------------------------------
5b969a0e  2016-12-09 15:10:32  localhost        myfile     10 MB          57 GB
At least then I could tell how much space a single snapshot is using and how much space I need to perform a restore.
As @fd0 already pointed out, printing the size on every invocation of restic snapshots would be a pretty expensive command. But you can use restic stats to print the size of individual snapshots or the whole repository.
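To expand on that: restic stats accepts snapshot IDs and a `--json` flag, so the size column can be built by hand for the snapshots you care about, at the cost of one stats invocation each. A sketch, assuming `stats --json` reports the restore size in a `total_size` field (the subprocess call obviously needs a real repository):

```python
import json
import subprocess

def parse_stats(stats_json):
    """Extract the size in bytes from `restic stats --json` output.
    Assumes the restore size is reported as `total_size`."""
    return json.loads(stats_json)["total_size"]

def snapshot_restore_size(repo, snapshot_id):
    """Shell out to restic to get one snapshot's restore size
    (restore-size is the default stats mode)."""
    out = subprocess.run(
        ["restic", "-r", repo, "stats", "--json", snapshot_id],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_stats(out)

# The parsing step can be exercised without a repository:
print(parse_stats('{"total_size": 61203283968, "total_file_count": 12345}'))
```

This keeps the expensive work opt-in instead of slowing down every `restic snapshots` listing.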
> I think it's valuable to print this information right after the backup is complete, and we can also record it in the snapshot data structure in the repo. I've planned to add some kind of 'detail' view for a particular snapshot, and I think it is a good idea to display the number and size of new blobs there, but in the overview (command snapshots) it's not relevant enough. There, I think restic should display the whole size of a particular snapshot (what you get if you were to restore it), because that doesn't change.
Great idea! Is this enhancement in the queue? The total size of the deduplicated data in the repository would also be helpful in such a synopsis.
Any update on this feature? It would be very useful to be able to see each snapshot's size and its restore size.
+1
Not at this point. If there are any updates, it'll show in this issue.
I'd love to see this as well, particularly as a "sanity check" to see if one particular backup perhaps accidentally added some huge files that I don't need backed up (e.g. because I made a mistake in file exclusion rules). And, if so, to figure out which snapshot that was.
Being able to then inspect a snapshot and see just which directory exactly is causing the blowup, is particularly useful. If you can only compare it against "all other backups, past and future", then you can at least use it to find large files that change often and thrash the backup. If you can compare it against "only past snapshots", you can easily discover which file exactly it was, that is causing a particular snapshot to have grown so large.
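Assuming the per-snapshot added size were recorded somewhere (e.g. from the backup summaries), the sanity check itself is cheap. A sketch over hypothetical (snapshot ID, bytes added) pairs:

```python
def find_size_spikes(history, factor=5.0):
    """Return snapshot IDs whose added size exceeds `factor` times
    the median -- a crude sanity check for runaway backups caused by
    e.g. a mistake in file exclusion rules."""
    sizes = sorted(size for _, size in history)
    median = sizes[len(sizes) // 2]
    return [sid for sid, size in history if size > factor * median]

# Hypothetical history of (snapshot ID, bytes added per run):
history = [
    ("5b969a0e", 390_865),
    ("a1b2c3d4", 412_100),
    ("deadbeef", 9_800_000_000),  # broken exclusion rule: huge spike
    ("cafe0123", 401_337),
]
print(find_size_spikes(history))  # ['deadbeef']
```

Once the offending snapshot is identified, a per-snapshot detail view (or a diff against its predecessor) would show which directory caused the blowup.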
For comparison, here is how macOS's Time Machine does it:
$ tmutil calculatedrift /Volumes/ex1806/Backups.backupdb/my-machine/
2018-06-16-155213 - 2018-06-25-205709
-------------------------------------
Added: 5.3G
Removed: 1.0G
Changed: 5.4G
2018-06-25-205709 - 2018-07-16-160709
-------------------------------------
Added: 3.5G
Removed: 1.6G
Changed: 2.0G
...
Every such block takes a few minutes to calculate on an external USB (spinning 3.5") disk. The rule of thumb on my setup is 1 min/GB changed.
You can drill down into directories with reasonable speed:
$ tmutil uniquesize /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh\ HD\ -\ Data/Users/hraban/
133.2M /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh HD - Data/Users/hraban
$ time tmutil uniquesize /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh\ HD\ -\ Data/Users/hraban/Library/
66.9M /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh HD - Data/Users/hraban/Library
real 0m5.991s
user 0m0.030s
sys 0m0.140s
This is not the same as "total file size" (i.e.: tmutil uniquesize takes deduplication into consideration):
$ time du -sh /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh\ HD\ -\ Data/Users/hraban/
164G /Volumes/ex1806/Backups.backupdb/my-machine/2021-12-14-195712/Macintosh HD - Data/Users/hraban/
real 4m0.598s
user 0m1.000s
sys 0m18.789s
Context, for those unfamiliar with macOS's Time Machine: it uses filenames as keys and does no content inspection at all. Renaming a file leads to an entirely new copy being stored in the backup. One bit changed in a file (and the timestamp updated): same, a full new copy in the backup (the pathological case for Time Machine is a large sqlite3 file with frequent, small changes). It has some similarities to rsync, if you squint right. On the plus side, the backup target is a regular(ish) directory, so you can open and inspect it with your regular tools.
It would be nice if you could use a (hypothetical) restic equivalent to figure out if it's actually handling that pathological case well. In the case of frequent minor changes to a large sqlite file: is restic actually able to reuse parts of it from previous snapshots? How much? -- or can you already answer this question using existing tools?
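Part of this can already be answered with restic diff, which compares two snapshots and prints summary lines including the amount of data added between them. A sketch that shells out to it; the output parsing here is approximate and may need adjusting to your restic version:

```python
import re
import subprocess

def parse_added(diff_output):
    """Pull the 'Added:' summary figure out of `restic diff` output.
    The format matched here is approximate, not a stable API."""
    m = re.search(r"Added:\s+([\d.]+\s*\w+)", diff_output)
    return m.group(1) if m else None

def added_between(repo, snap_a, snap_b):
    """Run `restic diff` between two snapshots and return the
    reported amount of added data."""
    out = subprocess.run(
        ["restic", "-r", repo, "diff", snap_a, snap_b],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_added(out)

# The parser can be tested on sample output:
print(parse_added("Data Blobs: 12 new\n  Added:   5.3 GiB\n"))  # 5.3 GiB
```

For the sqlite case specifically: a small "Added" figure between two snapshots that both contain the large file means restic's chunker did reuse most of it from the previous snapshot.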
I'd really like to emphasize how important this feature is. Regular size checks are part of backup reviews, to ensure the backup doesn't suddenly back up nothing / too much, which can mostly be seen when the backup size goes up or down unreasonably.
Thank you for the effort!
@EugenMayer FYI, if you happen to be running Restic locally, my restic-runner script optionally outputs the change in repository size after a backup run. It's helped me catch several times when new, large files were backed up that I didn't want backed up. https://github.com/alphapapa/restic-runner
It's understood that calculating the size of a snapshot is expensive, so adding it to the snapshots command by default would make it extremely slow. Still, there may be situations, like the ones described by other people here, where this information would be so important to me that I'd be willing to wait even two hours for a result. So maybe stats (or a slightly simpler version of it) could in fact be added as a flag to the snapshots command, properly documented as something that should be used only when absolutely necessary.
Having said that, what most people have asked for here is a lot simpler than that. Already today, restic calculates the size of the snapshot after each backup run. Why not simply add this information as a string to the snapshot in the repo? It could then easily be shown in the output of the snapshots command, as an additional column called "Reported snapshot size".
Sure, if you're going to implement this now, it will look a bit ugly since older snapshots won't yet have this information. Personally, I'd be fine with it.
And thanks for developing restic, it's a great tool.
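Concretely, the proposal amounts to one extra, write-once field in the snapshot's JSON record. A sketch of what that could look like (the `reported_size` field is hypothetical, not part of restic's actual snapshot format):

```python
import json

# Restic stores each snapshot as a small JSON object; the proposal
# adds one field, written once at backup time and never updated.
# `reported_size` is a hypothetical name, not restic's real schema.
snapshot = {
    "time": "2016-12-09T15:10:32Z",
    "hostname": "localhost",
    "paths": ["/home/user/myfile"],
    "reported_size": 390865,  # bytes, as calculated at backup time
}
print(json.dumps(snapshot, indent=2))
```

Older snapshots would simply show an empty column, the same way missing tags already do.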
Yes, I agree. Any metadata could be added (by restic developers) to a snapshot during creation, and reading this metadata should not slow anything down. The thing is, it's been 5 years, so I'm not really holding my breath.
> Having said that, what most people have asked for here is a lot simpler than that. Already today, restic calculates the size of the snapshot after each backup run. Why not simply add this information as a string to the snapshot in the repo? It could then easily be shown in the output of the snapshots command, as an additional column called "Reported snapshot size".
Should it be recalculated each time older snapshots are removed?
> > Having said that, what most people have asked for here is a lot simpler than that. Already today, restic calculates the size of the snapshot after each backup run. Why not simply add this information as a string to the snapshot in the repo? It could then easily be shown in the output of the snapshots command, as an additional column called "Reported snapshot size".
>
> Should it be recalculated each time older snapshots are removed?
No. If it's a matter of wording, let's call it "upload size", or anything else. It's just logged information. As with any other log, this information should not change later down the road.
> No. If it's a matter of wording, let's call it "upload size", or anything else. It's just logged information. As with any other log, this information should not change later down the road.
But once the previous snapshot is deleted, this information becomes meaningless. Worse, unless the previous snapshot's hash is recorded alongside it, there will be no indication that the recorded information is meaningless.
I don’t see what value there would be in knowing the upload size, in most cases. I can see a few exceptions.
I back up a database-driven app that uses block storage, similar to restic itself in that blocks are added but a "purge" only happens periodically. It would be useful to spot when a big upload happened: there is basically no value in deleting intermediary snapshots, but when the app does a purge and rebuilds its storage blocks, there is suddenly tons of useless data, and spotting an upload-size spike would make it easy to delete everything older and get a decent amount of space back.
But this is an edge case at best, in most cases the upload size isn’t actually going to provide useful information.
Because of restic’s nature, the only thing I can see as useful would be a way to propose a deletion and get a value for how much space could be released. I’m unclear whether this can be calculated in a dry run, but there was a post here that seemed to suggest maybe?
I’m also missing this. I would add two columns, though:
- “net size”, i.e. the size of the restore area, were I to restore the full snapshot (modulo filesystem cluster size; I’d be fine with just adding the individual files’ sizes, or rounding them up to 512 bytes or 1/2/4/8 KiB)
- “snapshot size”, i.e. amount of storage added when this snapshot was added, i.e. server-side size of the snapshot minus amount saved due to deduplication from prior snapshots (note that this can and will change when removing prior snapshots, and yes, that is expected)