
Disk write throughput with --progress

Open • brianjmurrell opened this issue • 11 comments

Have you checked borgbackup docs, FAQ, and open GitHub issues?

Yes

Is this a BUG / ISSUE report or a QUESTION?

RFE

System information. For client/server mode post info for both machines.

Your borg version (borg -V).

1.1.17

Operating system (distribution) and version.

AlmaLinux 8.7

Hardware / network configuration, and filesystems used.

Local ext4 filesystem.

How much data is handled by borg?

N/A

Full borg commandline that led to the problem (leave out excludes and passwords)

N/A

Describe the problem you're observing.

It would be nice, when evaluating compression algorithms for example, to know how much disk write throughput borg is achieving, if, say, one wanted to tune compression to roughly match the speed of the disk.

But even compression evaluation aside, it is still nice to see in a backup progress report what kind of throughput is being achieved.
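To illustrate the kind of display being asked for, here is a minimal sketch of a job-wide rate counter in the spirit of curl's progress meter. This is not borg code; the class name and the place the byte counts would come from are hypothetical:

```python
import time

class ThroughputMeter:
    """Accumulate bytes over the whole job and report an overall rate."""

    def __init__(self):
        self.start = time.monotonic()
        self.total_bytes = 0

    def add(self, nbytes):
        self.total_bytes += nbytes

    def rate(self):
        """Bytes per second since the job started, not just a sample window."""
        elapsed = max(time.monotonic() - self.start, 1e-9)
        return self.total_bytes / elapsed


meter = ThroughputMeter()
# Wherever the backup loop already updates its byte counters, it could also do:
#     meter.add(len(chunk_written))
#     print(f"\r{meter.rate() / 1e6:8.1f} MB/s", end="")
```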

Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.

N/A

Include any warnings/errors/backtraces from the system logs

N/A

brianjmurrell • Feb 01 '23

You can use iotop for such needs.

infectormp • Feb 01 '23

Just a notice: you are using an outdated borg version.

infectormp • Feb 01 '23

You can use iotop for such needs.

I quite disagree. iotop tells me, for a sample window, what the speed might be, but it does not give me an ongoing measure of the write speed of the entire backup job.

I am thinking more along the lines of the progress output of curl, etc.

Funnily enough, even the --stats output at the end doesn't provide any kind of throughput measurement. It provides the numbers for one to do one's own math, but, just like --stats does in rsync, I think a throughput summary would be useful there too.
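In the meantime, the per-job math can be done from borg's machine-readable output. A rough sketch, assuming the JSON layout that borg create --json emits in the 1.1/1.2 series (an archive object carrying duration plus a stats object with original_size / compressed_size / deduplicated_size; the field names should be checked against the version actually in use):

```python
import json
import sys

# Usage: borg create --json ::archive-{now} /path/to/data | python3 borg_rates.py
doc = json.load(sys.stdin)
archive = doc["archive"]
duration = max(archive["duration"], 1e-9)  # wall-clock seconds, guarded against zero
stats = archive["stats"]

for key in ("original_size", "compressed_size", "deduplicated_size"):
    rate = stats[key] / duration
    print(f"{key}: {rate / 1e6:.1f} MB/s over {duration:.0f} s")
```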

Just a notice: you are using an outdated borg version.

Understood. Apparently there is a portability issue with 1.18 (see the last paragraph) that prevents it from being updated in EPEL8.

brianjmurrell • Feb 01 '23

iotop tells me, for a sample window, what the speed might be, but it does not give me an ongoing measure of the write speed of the entire backup job.

iotop displays the I/O bandwidth read and written by each process/thread, taken from the Linux kernel (from the man page).

infectormp • Feb 01 '23

@infectormp I know very well what iotop is and does, and while I appreciate the suggestion, it does not satisfy in the least what this ticket is requesting. I am looking for complete-job throughput, not just snapshots over windows of (real) time.
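For a whole-job figure without touching borg at all, the kernel's cumulative per-process counters in /proc/<pid>/io (the same accounting iotop samples) can be polled around the run. A rough sketch, assuming Linux with task I/O accounting enabled and permission to read the borg process's /proc entry; the last second or so of writes may be missed:

```python
import subprocess
import sys
import time

def run_with_write_rate(cmd):
    """Run cmd, poll its cumulative write_bytes, and report a whole-job rate."""
    proc = subprocess.Popen(cmd)
    start = time.monotonic()
    write_bytes = 0
    while proc.poll() is None:
        try:
            with open(f"/proc/{proc.pid}/io") as f:
                for line in f:
                    if line.startswith("write_bytes:"):
                        write_bytes = int(line.split()[1])
        except FileNotFoundError:
            break  # the process exited between poll() and open()
        time.sleep(1)
    elapsed = max(time.monotonic() - start, 1e-9)
    print(f"~{write_bytes / elapsed / 1e6:.1f} MB/s written over {elapsed:.0f} s")
    return proc.wait()

if __name__ == "__main__":
    # e.g. python3 job_write_rate.py borg create ::archive /path
    sys.exit(run_with_write_rate(sys.argv[1:]))
```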

brianjmurrell • Feb 07 '23

@brianjmurrell There is an optional flag for borg create to view the progress. The following is an excerpt from the docs.

The --progress option shows (from left to right) Original, Compressed and Deduplicated (O, C and D, respectively), then the Number of files (N) processed so far, followed by the currently processed path.

Now I understand this doesn't cover the whole ask of this ticket. Basically, the ask is that borg display the rate at which these 3 categories (O, C, and D) are being processed, both live and after the operation?

Michael-Girma • Mar 21 '23

Without being able to see the actual output of --progress[1], I could only guess yes. Live throughput (like rsync does) and then the final results.

[1] I've switched from Borg to using VDO to get the benefits of compression and de-duplication but with a native file system UX. The lack of a (performant) native file system UX was a show-stopper for me with Borg. I frequently want to query a file through history, like looking for a command in a .bash_history across all backups, for example.

VDO seems to be achieving compression and de-duplication on par with Borg. Here's a comparison of VDO and Borg for my oldest backups (and certainly not the most de-duplicatable, given the yearly spans of time between them):

            Orig. Size   Compressed & Deduped Size   Total Actual Used
yearly.11
    Borg    123.17       82.03                       82.03
    VDO     119          84.6                        84.6
yearly.10
    Borg    170.66       73.58                       155.61
    VDO     165          69.1                        153.7
yearly.9
    Borg    218.76       46.99                       202.59
    VDO     210          50.5                        204.2
yearly.8
    Borg    258.01       31.69                       234.29
    VDO     247          35.1                        239.3

brianjmurrell • Mar 21 '23

@ThomasWaldmann How should we address this?

Michael-Girma • Mar 21 '23

There were quite a few changes to the compressed-size-related stats in master (because csize needed to be removed from the chunk list entries), so I guess this means:

  • no change in 1.2.x (as it maybe could not be forward-ported)
  • only work on something for master (borg2)

Throughput shown for:

  • original size: likely that would give impressive numbers for unchanged files, especially for 2nd+ backups, but it would be a bit "synthetic" because it is just a theoretical value, like "if this was tar, that would be the throughput". Only for a file that is actually read/chunked (A and M status) would this reflect the input read rate.
  • compressed size: maybe not easily possible (csize was removed in master), and similarly synthetic, like "if this was a .tgz, that would be your throughput".
  • deduplicated compressed size: that would be what actually gets written to the repo. It would show shockingly low throughput for 2nd+ backups with few changed files, simply because not much needs to be written to the repo. Only with many or larger added files, or heavily modified files, would this show interesting rates (see the worked numbers after this list).
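To put purely illustrative numbers on that last point (all figures assumed, not measured): if a second backup of a 200 GB data set finishes in 10 minutes but only 1.5 GB of new deduplicated, compressed data actually gets written, the "original" rate works out to roughly 200 GB / 600 s ≈ 330 MB/s, while the deduplicated rate is only about 1.5 GB / 600 s ≈ 2.5 MB/s, even though the disk was nowhere near being the bottleneck.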

So, not sure if this is that useful after all.

ThomasWaldmann • Mar 21 '23

Frankly, both throughputs would be interesting, and even fun.

I.e. what was the actual physical throughput of bytes actually written to disk, so that I could get an idea of how much faster things could go if I bought a faster disk.

But equally interesting (and fun, even!) would be "what is the effective bandwidth of the disk if we were writing all of those duplicate blocks out in full, and uncompressed?" I.e. how much did I speed my disk up by de-duplicating and compressing? Put another way, "how fast a disk would I need to match the savings that de-duplicating and compressing are giving me?"
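A minimal sketch of that arithmetic, working from the totals that --stats already prints; every figure below is a hypothetical placeholder, not a measurement:

```python
# Hypothetical totals from one run; substitute the numbers from borg's --stats output.
original_size = 200e9   # bytes borg "saw" (the O column)
dedup_written = 1.5e9   # compressed, deduplicated bytes actually written (the D column)
wall_time = 600.0       # seconds the whole borg create took

physical_rate = dedup_written / wall_time    # what the disk really had to sustain
effective_rate = original_size / wall_time   # "as if this were tar" throughput
speedup = original_size / dedup_written      # how much faster a disk would need to be to match it

print(f"physical: {physical_rate / 1e6:.1f} MB/s, "
      f"effective: {effective_rate / 1e6:.1f} MB/s, "
      f"speedup: {speedup:.0f}x")
```

With those assumed numbers, de-duplication and compression are doing the work of a disk roughly 130 times faster than what was physically needed.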

brianjmurrell • Mar 21 '23

There was actually an experimental branch once which would tell you exactly these things: whether reading input files, compression, encryption, or writing output files was the bottleneck, and if so, by how much, etc. But it turned out that this incurred rather significant overhead and noticeably reduced performance for small files.

enkore • Aug 29 '23