
how to reduce global reserve?

Open yamato225 opened this issue 3 years ago • 13 comments

Can I reduce global reserve space?

I made a 512MB btrfs partition. After writing files, the partition became full, so I tried to remove the files, but 'No space left on device' occurred.

So I expanded the partition to 1GB. After that, the result of btrfs fi usage is shown below.

Overall:
    Device size:                   1.07GiB
    Device allocated:              1.07GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                       1004.00KiB
    Free (estimated):            119.33MiB      (min: 119.33MiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:120.00MiB, Used:684.00KiB (0.56%)
   /dev/sda6     120.00MiB

Metadata,DUP: Size:481.50MiB, Used:144.00KiB (0.03%)
   /dev/sda6     963.00MiB

System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
   /dev/sda6      16.00MiB

Unallocated:
   /dev/sda6       1.00MiB

This shows that the global reserve uses 512MB. The partition doesn't actually contain any files, yet the global reserve area takes 512MB and I can only use the remaining area.

I want to know:

  • Can I remove or reduce the global reserve area?
  • Can I configure the partition so that it doesn't reserve so much space?

yamato225 avatar May 18 '22 07:05 yamato225

I tried these commands, but they don't work.

  • btrfs scrub start -B
  • btrfs check --repair
  • btrfs check --force --clear-space-cache v1
  • btrfs check --force --clear-space-cache v2

yamato225 avatar May 18 '22 08:05 yamato225

The global reserve is scaled automatically and cannot be changed or turned off. It's a reserve that lets the filesystem work in edge cases, e.g. when there's no user-allocatable space left but data needs to be removed, which (due to COW) requires some internal allocation.

kdave avatar May 18 '22 14:05 kdave

A 1G partition is small and the normal split of data and metadata does not work very well here, because the typical chunk size is 1G for data and 256M for metadata, so it's hard to fit under 1G. Please post the output of btrfs fi df. For small partitions (a few gigabytes; there's no exact recommended number) it's possible to use the mixed mode, mkfs.btrfs --mixed, which shares the same chunks between data and metadata so the space can be utilized better.

kdave avatar May 18 '22 14:05 kdave
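The mixed-mode suggestion above can be tried without a spare partition, since mkfs.btrfs can format a plain file. A minimal sketch (the image path and size are illustrative, and the mkfs step assumes btrfs-progs is installed):

```shell
# Create a ~900 MiB image file and format it with --mixed, so data and
# metadata share the same chunks instead of competing for separate ones.
# (Illustrative path/size; the mkfs step needs btrfs-progs.)
truncate -s 900M /tmp/btrfs-mixed.img
if command -v mkfs.btrfs >/dev/null 2>&1; then
    mkfs.btrfs --mixed -f /tmp/btrfs-mixed.img
else
    echo "mkfs.btrfs not installed; install btrfs-progs to run this step"
fi
```

On a real system the same --mixed flag would be passed when formatting the partition. Note that mixed mode is chosen at mkfs time; it cannot be enabled on an existing filesystem later.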

I tried these commands, but they don't work.

You should read what the commands actually do before randomly trying anything and hoping that something will work.

  • btrfs scrub start -B

Does no harm, but it only verifies checksums.

  • btrfs check --repair

There's a fat warning in the documentation saying exactly not to run this command without a good reason.

  • btrfs check --force --clear-space-cache v1
  • btrfs check --force --clear-space-cache v2

These fix specific problems, or are used after switching between space cache v1 and v2.

kdave avatar May 18 '22 14:05 kdave

Something is wrong here... a 1G filesystem should have a much smaller global reserve.

I have filesystems from 14G to 96G that have 16M to 227M global reserve. 128G and larger hit the cap at 512M. A 1G filesystem should be at the bottom end of that range, not the top.

@yamato225 what kernel version is this?

Zygo avatar May 18 '22 15:05 Zygo

@kdave Thank you for kindly letting me know many things!

@Zygo

I use Ubuntu 18.04 with kernel version 4.15.0-163.

I'll post more details about my environment and how to reproduce this later.

yamato225 avatar May 18 '22 15:05 yamato225

My environment:

  • CPU: Intel Atom E3950
  • Mem: 8GB
  • Disk: SSD 64GB
  • OS: Ubuntu Server 18.04

I created the btrfs filesystem on a 1GB partition at /dev/sda7.

The command I used to create it:

mkfs.btrfs -f /dev/sda7

How to reproduce:

The btrfs filesystem is mounted at /var/log.

mount options( in fstab configuration):

/dev/sda7   /var/log  btrfs  ssd,nofail,noatime,compress=zstd    0    0   

To test robustness, I cut the power repeatedly while logs were being written. Three or four tries bring the global reserve to its maximum.

yamato225 avatar May 19 '22 00:05 yamato225

I found the code that limits the global reserve: https://github.com/kdave/btrfs-devel/blob/f993aed406eaf968ba3867a76bb46c95336a33d0/fs/btrfs/block-rsv.c#L396

It looks like the limit is fixed in the code, not dynamic or modifiable.

I feel this is not good, because a btrfs filesystem can be created in less than 512MB of space. I mean this could be configurable by an option like this:

mount -t btrfs -o global_reserve=128MB .....

What are your opinions?

yamato225 avatar May 19 '22 21:05 yamato225

That code sets the maximum size. The minimum size is based on the size of the filesystem (or should be, but seems to be failing here).

Zygo avatar May 19 '22 22:05 Zygo
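The cap-versus-minimum distinction can be modeled in a few lines. This is a simplified sketch of the clamp at the linked line of block-rsv.c, assuming only that the kernel first derives some num_bytes from metadata usage (the real calculation has more inputs) and then caps it:

```shell
# Simplified model of: block_rsv->size = min_t(u64, num_bytes, SZ_512M);
# num_bytes stands in for whatever the kernel derives from metadata usage.
SZ_512M=$((512 * 1024 * 1024))

global_rsv_cap() {
    num_bytes=$1
    if [ "$num_bytes" -lt "$SZ_512M" ]; then
        echo "$num_bytes"      # below the cap: reserve tracks the computed size
    else
        echo "$SZ_512M"        # at or above the cap: clamped to 512 MiB
    fi
}

global_rsv_cap $((16 * 1024 * 1024))         # fresh small fs: prints 16777216
global_rsv_cap $((2 * 1024 * 1024 * 1024))   # heavy churn: prints 536870912
```

The patch discussed below only lowers the second argument of min_t (the cap) to SZ_32M; the size-based minimum is computed before the clamp, which is why a fresh filesystem still starts at 16MB.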

Just after mkfs.btrfs, the global reserve is 16MB, so the minimum size setting seems to work correctly. However, the size grows large through writing files, removing files, and cutting the power.

yamato225 avatar May 19 '22 22:05 yamato225

I've tried changing this line: https://github.com/kdave/btrfs-devel/blob/f993aed406eaf968ba3867a76bb46c95336a33d0/fs/btrfs/block-rsv.c#L396

- block_rsv->size = min_t(u64, num_bytes, SZ_512M);
+ block_rsv->size = min_t(u64, num_bytes, SZ_32M);

This appears to be working correctly in my environment.

yamato225 avatar May 24 '22 10:05 yamato225

Such a change may work in this specific case but will break elsewhere. The space reservations are tricky; for example, deletion of subvolumes with lots of shared extents may fail because of that.

kdave avatar May 24 '22 16:05 kdave

Such a change may work in this specific case but will break elsewhere.

Yeah, I just confirmed that. I feel the maximum value should be configurable, so the user can change it to fit various use cases.

yamato225 avatar May 24 '22 16:05 yamato225