BTRFS reporting incorrect disk usage
I'm running LXD 4.0.5 with a custom partition formatted as BTRFS and set as the storage pool.
I noticed that disk space usage was getting lost somewhere; you can reproduce it like this.
Create an Ubuntu container, then send a PATCH request to /1.0/instances/ubuntu setting the disk size to 5GB.
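For reference, the PATCH body I used looks roughly like this (the pool name "default" is from my setup; adjust it to yours):

```json
{
  "devices": {
    "root": {
      "type": "disk",
      "path": "/",
      "pool": "default",
      "size": "5GB"
    }
  }
}
```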
$ lxc info ubuntu
Disk usage:
root: 16.38kB
I then installed mysql-server and apache2.
Note: on an LXD host with a ZFS storage pool, the same command reports the usage as 1.25GB.
$ lxc info ubuntu
Disk usage:
root: 519.21MB
I created a snapshot by sending a POST request with the following payload:
{
"stateful": false,
"name": "ubuntu-20210216-01"
}
and the disk usage drops to the size of the snapshot. I tested on an LXD host with ZFS and this does not happen there.
Disk usage:
root: 9.37MB
I then got curious and deleted the snapshot; the disk usage remained the same.
Disk usage:
root: 9.37MB
Other observations:
- LXD reports the snapshot size for BTRFS storage as 10MB
- LXD reports the snapshot size for ZFS storage as 1.32MB (after 3 minutes)
Note: running du -h -s / within the container on both the ZFS and BTRFS hosts returns exactly the same result, 1.2GB.
@stgraber, just wondering if you have seen this? Taking a snapshot overwrites the reported disk usage, and I am not sure why the disk usage differs when installing the same apps.
Yeah, I've had that issue in my queue for a little while now.
Confirmed this is still an issue.
OK, so I figured it out, and I am not sure if it's actually a bug. It depends on your perspective of what is being asked with lxc info <instance>, or more specifically lxc query /1.0/instances/c1?recursion=1.
The reason appears to be that the size shown here is only the space consumed exclusively by the main volume (relative to its earlier snapshots).
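As a rough sketch (a toy model, not LXD's or btrfs's actual code), "exclusive" accounting counts only the extents that no other subvolume references, which is why the number collapses as soon as a snapshot shares everything:

```python
def qgroup_usage(subvol, all_subvols):
    """Toy model of btrfs qgroup accounting; extents are {id: size_in_MiB}.

    rfer counts everything the subvolume references; excl counts only
    the extents referenced by no other subvolume.
    """
    rfer = sum(subvol.values())
    excl = sum(size for eid, size in subvol.items()
               if not any(eid in other for other in all_subvols
                          if other is not subvol))
    return rfer, excl

# Container alone: every extent is exclusive.
container = {f"extent-{i}": 1 for i in range(550)}  # ~550 MiB of 1 MiB extents
print(qgroup_usage(container, [container]))         # (550, 550)

# A snapshot initially shares every extent, so excl collapses to 0
# while rfer is unchanged -- matching the drop seen in `lxc info`.
snapshot = dict(container)
print(qgroup_usage(container, [container, snapshot]))  # (550, 0)
```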
We can see this in the output of the underlying btrfs tool:
After initial launch:
sudo btrfs qgroup show -e -f /var/lib/lxd/storage-pools/btrfs/containers/c1
qgroupid rfer excl max_excl
-------- ---- ---- --------
0/274 475.20MiB 16.00KiB none
After installing apache2 and stopping the instance:
sudo btrfs qgroup show -e -f /var/lib/lxd/storage-pools/btrfs/containers/c1
qgroupid rfer excl max_excl
-------- ---- ---- --------
0/274 550.42MiB 82.80MiB none
After taking a snapshot:
sudo btrfs qgroup show -e -f /var/lib/lxd/storage-pools/btrfs/containers/c1
qgroupid rfer excl max_excl
-------- ---- ---- --------
0/274 634.02MiB 9.27MiB none
sudo btrfs qgroup show -e -f /var/lib/lxd/storage-pools/btrfs/containers-snapshots/c1/snap0
qgroupid rfer excl max_excl
-------- ---- ---- --------
0/275 634.02MiB 3.50MiB none
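If anyone wants to script this, the qgroup output parses easily. A small sketch, with the column layout assumed from the samples above:

```python
UNITS = {"KiB": 1024, "MiB": 1024 ** 2, "GiB": 1024 ** 3}

def parse_size(field):
    """Convert a field like '9.27MiB' to bytes; 'none' becomes None."""
    if field == "none":
        return None
    for suffix, mult in UNITS.items():
        if field.endswith(suffix):
            return float(field[: -len(suffix)]) * mult
    return float(field)  # bare byte count

def parse_qgroup_show(text):
    """Parse `btrfs qgroup show` output into {qgroupid: column dict}."""
    rows = {}
    for line in text.strip().splitlines()[2:]:  # skip header + separator
        qid, rfer, excl, max_excl = line.split()
        rows[qid] = {"rfer": parse_size(rfer),
                     "excl": parse_size(excl),
                     "max_excl": parse_size(max_excl)}
    return rows

sample = """\
qgroupid rfer excl max_excl
-------- ---- ---- --------
0/274 634.02MiB 9.27MiB none
"""
print(parse_qgroup_show(sample)["0/274"]["excl"])  # 9720299.52
```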
One thing that may be a problem is that the snapshot appears to be accounted under a different qgroup ID. That doesn't seem right.