
`block size: 4096B configured, 16384B native` after replacing a disk with a zvol of a different zpool and then replacing it with itself again (its blocks are 4096B, not 16384B)

ipaqmaster opened this issue 7 months ago · 1 comment

System information

Type Version/Name
Distribution Name Ubuntu
Distribution Version 24.04 LTS (Noble Numbat)
Kernel Version 6.8.0-31-generic
Architecture x86_64
OpenZFS Version zfs-2.2.2-0ubuntu9 + zfs-kmod-2.2.2-0ubuntu9

Describe the problem you're observing

I am on a zfs root so I can't zpool export remotely then zpool import -ad /dev/disk/by-id to fix the naming convention of the member drives+partitions.

Due to https://github.com/openzfs/zfs/issues/2076 I also cannot replace a drive with its by-id self. Bummer.

So, with a USB3 HDD attached and partitioned as zpool `backup`, and this 4-disk zpool being a raidz2 (tolerant of two disk failures/removals), I replaced /dev/sdc3 with a zvol on that backup zpool, backup/sdc3_replacement, intending to replace it with itself once resilvering to the portable drive completed.

The single external drive backing zpool `backup` was too small (115GiB) to serve as a direct replacement for my zpool's disk, but a sparse 1T zvol (`zfs create -s -V 1T`) was able to resilver all 86GiB of used space on the main zpool without issue.

Upon replacing /dev/zvol/backup/sdc3_replacement back with /dev/disk/by-id/ata-sdcPreferredName-part3, I am now shown `block size: 4096B configured, 16384B native` on the by-id -part3 name of this real ATA disk, even though it was the replaced zvol that really was 16384B, not the real disk.

Describe how to reproduce the problem

Replace /dev/sdX3 with an appropriately sized sparse zvol on another zpool. (Using a zvol from the same zpool hard-locks the system and requires manual intervention to reboot.)

Wait for resilvering to complete.

Use the intended full /dev/disk/by-id/ata-xxx-yyy-part3 path of the original disk to replace the sacrificial zvol, completing the renaming.

Wait for resilvering to complete.

In `zpool status`, the 16384B native warning is still shown for a drive that really has 4096B sectors.
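Under assumed names (pool `rpool`, scratch pool `backup`, and the by-id path from the report), the reproduction steps above might be sketched as:

```shell
# Hypothetical pool/device names; commands require root and real hardware.
# 1. Create a sparse 1T zvol on the other pool to act as a temporary member.
zfs create -s -V 1T backup/sdc3_replacement

# 2. Swap the real partition out for the zvol and wait for resilver.
zpool replace rpool /dev/sdc3 /dev/zvol/backup/sdc3_replacement
zpool wait -t resilver rpool

# 3. Swap the zvol back for the same disk, now under its by-id name.
zpool replace rpool /dev/zvol/backup/sdc3_replacement \
    /dev/disk/by-id/ata-sdcPreferredName-part3
zpool wait -t resilver rpool

# 4. The bug: status still warns "block size: 4096B configured, 16384B native".
zpool status rpool
```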

Include any warning/errors/backtraces from the system logs

ipaqmaster commented May 12 '25 10:05

> I am on a zfs root so I can't zpool export remotely then zpool import -ad /dev/disk/by-id to fix the naming convention of the member drives+partitions.

To solve this problem: boot with ZFSBootMenu.org (a better bootloader for ZFS than grub), do the export/import dance there, then refresh your system's initrd so its stale /etc/zfs/zpool.cache is replaced with the one created inside the ZFSBootMenu live system; that makes the new names stick when the pool is imported through zfs-import-cache.service. This is no issue since you'll be in a running Linux: simply mount the root dataset, copy that file in, chroot into it, and run `update-initramfs`.
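A sketch of that dance from the ZFSBootMenu recovery shell; the pool name `rpool` and root dataset `rpool/ROOT/ubuntu` are assumptions, not from the report:

```shell
# Inside the ZFSBootMenu live environment (hypothetical names, requires root):
zpool export rpool
zpool import -aN -d /dev/disk/by-id     # re-import so members are recorded by-id
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Copy the freshly written cache file into the installed system and rebuild initrd.
mount -t zfs -o zfsutil rpool/ROOT/ubuntu /mnt   # hypothetical root dataset
cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt update-initramfs -u
```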

Or, while in your real system: `zpool set cachefile=none rpool` (removing zpool.cache for this pool completely, which stops zfs-import-cache.service from using it), set ZPOOL_IMPORT_OPTS="-d /dev/disk/by-..." in /etc/default/zfs (so zfs-import-scan.service imports using the device nodes from that path), run `update-initramfs`, and reboot.
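The in-place alternative could be sketched like this; `rpool` and the by-id path are assumptions carried over from earlier in the thread:

```shell
# From the running system (hypothetical pool name, requires root):
zpool set cachefile=none rpool    # stop zfs-import-cache.service from using zpool.cache

# Tell zfs-import-scan.service which device nodes to import from.
echo 'ZPOOL_IMPORT_OPTS="-d /dev/disk/by-id"' >> /etc/default/zfs

update-initramfs -u
reboot
```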

Though both ways could break your boot, depending on the distribution, in case zfs-import-cache.service / zfs-import-scan.service run before /dev/disk/by-* has been fully populated by SystemDefunct udev, for performance reasons 🙄.

So better to start with installing ZFSBootMenu in any case... it allows fixing most (if not all) things that stop the boot process, even should things go south deep and hard. It also removes any need for a dedicated root pool, though any separate /boot partitions have to be merged into the respective root filesystem(s); plus, all the ZFS feature-flag limitations imposed by grub will be gone.

> (Using a zvol from the same zpool hard-locks the system and required manual intervention to reboot)

That is to be expected: the zvol would (likely) try to store itself in itself, at least in part, and that is where it deadlocks. IMHO ZFS should detect this kind of attempt to create a circular dependency ("is the new device being added/attached a zvol of the pool itself?" should be answerable) and deny such suicidal requests.
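The suggested check is answerable from the device path alone, since a zvol of pool `p` surfaces under /dev/zvol/p/... (as with /dev/zvol/backup/sdc3_replacement above). A minimal sketch of the idea as userspace path matching, not the actual OpenZFS code:

```shell
# Return 0 (deny) if 'dev' is a zvol belonging to 'pool' itself.
# Pure string check on the /dev/zvol/<pool>/<volume> naming convention.
is_circular_attach() {
  local pool="$1" dev="$2"
  case "$dev" in
    /dev/zvol/"$pool"/*) return 0 ;;  # zvol of the same pool: would deadlock
    *)                   return 1 ;;
  esac
}

is_circular_attach rpool /dev/zvol/rpool/sdc3_replacement  && echo "deny"
is_circular_attach rpool /dev/zvol/backup/sdc3_replacement || echo "allow"
```

A real in-kernel check would resolve the device to its backing dataset rather than trust the path string, but the principle is the same.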

GregorKopka commented May 30 '25 23:05