
Sending 3GB snapshot to 930GB SDCard is Erring Out.

DiagonalArg opened this issue 4 months ago · 3 comments

btrbk version 0.32.6

I have just completed a btrfs balance on an SD card, and btrfs fi show indicates that the card has 24GB used out of a total of ~932GB. I have btrbk sending snapshots to that card, the first of which is roughly 2.8GB:

$ sudo btrfs fi du -s ./home.20250901T0800/
     Total   Exclusive  Set shared  Filename
   2.78GiB   524.67MiB     2.25GiB  ./home.20250901T0800/
$ sudo btrfs fi du -s ./home.20250907T0000/
     Total   Exclusive  Set shared  Filename
   2.86GiB   511.94MiB     2.30GiB  ./home.20250907T0000/
$ sudo btrfs fi du -s ./home.20250912T0000/
     Total   Exclusive  Set shared  Filename
 302.84GiB     7.54GiB   229.05GiB  ./home.20250912T0000/
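
For reference, the relevant parts of a btrbk.conf for this kind of setup look roughly as follows; the source volume path, snapshot directory and retention values here are placeholders rather than my exact configuration (the target path is the one that appears in the error below):

timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     no
target_preserve         20d 10w

volume /mnt/btr_pool
  snapshot_dir btrbk_snapshots
  subvolume home
    target /home/dev/Backup/BtrBk/home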

Immediately on starting btrbk, btrfs fi show indicated that the whole card was in use:

Label: none  uuid: xxxxxxxxxxxxxxx
        Total devices 1 FS bytes used 22.46GiB
        devid    1 size 931.95GiB used 931.95GiB path /dev/mapper/sd.luks.backup
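
(That 931.95GiB figure is chunk allocation rather than data actually written; the split between allocated and used space can be checked with something like the following, assuming the card is mounted at /home/dev/Backup:)

$ sudo btrfs filesystem usage /home/dev/Backup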

Then, after 4-5 minutes, btrbk produced an error on that first snapshot:

[snip...]
!!! /home/dev/Backup/BtrBk/home/home.20250901T0800
!!! Target "/home/dev/Backup/BtrBk/home" aborted: Failed to send/receive subvolume
NOTE: Some errors occurred, which may result in missing backups!
Please check warning and error messages above.
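
To get at the underlying send/receive failure, I can re-run with more verbose logging; something like the following should print the exact btrfs send / btrfs receive commands btrbk runs and their stderr:

$ sudo btrbk -l debug run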

At the end of the operation, after btrbk returned, btrfs fi show again indicated only ~24GB used:

Label: none  uuid: xxxxxxxxxxxxxxxxx
        Total devices 1 FS bytes used 21.60GiB
        devid    1 size 931.95GiB used 24.07GiB path /dev/mapper/sd.luks.backup

I have had this identical problem with two different cards, both from well-known manufacturers. I have tested those cards with f3, and they are not fraudulently reporting their size. What might be happening?
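
(For completeness, the capacity test was along these lines, with the card mounted at a scratch mount point; /mnt/sdcard here is a placeholder:)

$ f3write /mnt/sdcard
$ f3read /mnt/sdcard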

DiagonalArg commented Sep 15 '25 12:09

I turned on send_compressed_data and get similar behavior at the start of writing to the disk: usage jumps up to 633GB, and 30 seconds later it's at 931.95GB. It took longer to err out this time, but when it did, the disk stayed full:

        Total devices 1 FS bytes used 19.00GiB
        devid    1 size 931.95GiB used 931.95GiB path /dev/mapper/sd.luks.backup
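
For reference, that option is a single line in btrbk.conf (it relies on kernel and btrfs-progs support for compressed send):

send_compressed_data  yes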

Neither btrbk clean nor btrbk prune reports any action taken. It appears two snapshots did arrive, one full and one incremental: the two small (under 3GB) ones. It erred out during the second incremental, the 302GB one. I'm rebalancing now, and the disk is horribly imbalanced.
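
The balance below is stepped through increasing usage filters, roughly equivalent to this loop (a sketch; note that -musage also picks up the system chunks, which is why the second pass in each step shows flags 0x6):

$ for u in 0 2 5 10 20 30 40 50 70 90; do
>   sudo btrfs balance start -v -dusage=$u /home/dev/Backup/
>   sudo btrfs balance start -v -musage=$u /home/dev/Backup/
> done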

Balancing /home/dev/Backup/: **0%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 935 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=0
  SYSTEM (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 935 chunks

Balancing /home/dev/Backup/: **2%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=2
Done, had to relocate **887** out of 935 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=2
  SYSTEM (flags 0x2): balancing, usage=2
Done, had to relocate 1 out of 875 chunks

Balancing /home/dev/Backup/: **5%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate **842** out of 875 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=5
  SYSTEM (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 441 chunks

Balancing /home/dev/Backup/: **10%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=10
Done, had to relocate **411** out of 441 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=10
  SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 436 chunks

Balancing /home/dev/Backup/: **20%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=20
Done, had to relocate **409** out of 436 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=20
  SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 225 chunks

Balancing /home/dev/Backup/: **30%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=30
Done, had to relocate **201** out of 225 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=30
  SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 122 chunks

Balancing /home/dev/Backup/: **40%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=40
Done, had to relocate **97** out of 122 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=40
  SYSTEM (flags 0x2): balancing, usage=40
Done, had to relocate 1 out of 71 chunks

Balancing /home/dev/Backup/: **50%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=50
Done, had to relocate **47** out of 71 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=50
  SYSTEM (flags 0x2): balancing, usage=50
Done, had to relocate 1 out of 62 chunks

Balancing /home/dev/Backup/: **70%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=70
Done, had to relocate **43** out of 62 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=70
  SYSTEM (flags 0x2): balancing, usage=70
Done, had to relocate 1 out of 38 chunks

Balancing /home/dev/Backup/: **90%**
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=90
Done, had to relocate **18** out of 38 chunks

Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=90
  SYSTEM (flags 0x2): balancing, usage=90
Done, had to relocate 2 out of 30 chunks

Statistics (show): /home/dev/Backup/
Label: none  uuid: xxxxxxxxxxxxxxxxxxxxxxxxxx
	Total devices 1 FS bytes used 18.99GiB
	devid    1 size 931.95GiB used 29.07GiB path /dev/mapper/sd.luks.backup

DiagonalArg commented Sep 15 '25 14:09

I've now tried this with a different, 5.6GB snapshot, with the same behavior: while the snapshot send does complete, btrfs fi show jumps to 931.95GB right at the start, and a subsequent balance shows a badly imbalanced disk.

DiagonalArg commented Sep 15 '25 15:09

Advice from the btrfs IRC channel pointed me to this issue: I was, indeed, using ssd_spread as a mount option.
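
For anyone else landing here, the option can be checked and dropped with something like the following (mount point assumed; nossd_spread is the documented way to turn it off on a remount):

$ findmnt -no OPTIONS /home/dev/Backup
$ sudo mount -o remount,nossd_spread /home/dev/Backup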

DiagonalArg commented Sep 15 '25 16:09