
vdev tag set under the zpool_stats measurement points to incorrect vdev

Open toddhpoole opened this issue 4 years ago • 1 comment

It looks like the vdev tag set from zpool_stats is incorrectly reporting RAIDZ2 vdevs as RAIDZ. The same may also be occurring for RAIDZ3, though I haven't tested it.

Quick dump of zpool status:

[root@testnas0 ~]# zpool status
  pool: testtank0
 state: ONLINE
  scan: scrub canceled on Sat Sep  4 20:49:06 2021
config:

    NAME                        STATE     READ WRITE CKSUM
    testtank0                   ONLINE       0     0     0
      raidz2-0                  ONLINE       0     0     0
        wwn-0x5000c50087cfcfcc  ONLINE       0     0     0
        wwn-0x5000cca252cef0ed  ONLINE       0     0     0
        wwn-0x5000cca252cef787  ONLINE       0     0     0
        wwn-0x5000cca252d5cd37  ONLINE       0     0     0
        wwn-0x5000cca252e5780c  ONLINE       0     0     0
        wwn-0x5000cca252e6397d  ONLINE       0     0     0
        wwn-0x5000cca252e6406a  ONLINE       0     0     0
        wwn-0x5000cca252e650ad  ONLINE       0     0     0
        wwn-0x5000cca252e6532e  ONLINE       0     0     0
        wwn-0x5000cca257e84870  ONLINE       0     0     0
        wwn-0x5000cca257e88e13  ONLINE       0     0     0
        wwn-0x5000cca257e97fc8  ONLINE       0     0     0
      raidz2-1                  ONLINE       0     0     0
        wwn-0x5000c5008791c597  ONLINE       0     0     0
        wwn-0x5000cca257e9edbe  ONLINE       0     0     0
        wwn-0x5000cca257f4a158  ONLINE       0     0     0
        wwn-0x5000cca257f4cbd4  ONLINE       0     0     0
        wwn-0x5000cca257f4cf5a  ONLINE       0     0     0
        wwn-0x5000cca257f52e5c  ONLINE       0     0     0
        wwn-0x5000cca257f6499f  ONLINE       0     0     0
        wwn-0x5000cca266ea1143  ONLINE       0     0     0
        wwn-0x5000cca267ed86ef  ONLINE       0     0     0
        wwn-0x5000cca26bd5cbff  ONLINE       0     0     0
        wwn-0x5000cca26bdda99f  ONLINE       0     0     0
        wwn-0x5000cca26cc67e04  ONLINE       0     0     0

errors: No known data errors

Note that the pool consists of two RAIDZ2 vdevs: raidz2-0 and raidz2-1.

First few lines of zpool_influxdb output:

[root@testnas0 ~]# /usr/libexec/zfs/zpool_influxdb
zpool_stats,name=testtank0,state=ONLINE,vdev=root alloc=164827933913088u,free=27208643825664u,size=192036577738752u,read_bytes=33456128u,read_errors=0u,read_ops=5146u,write_bytes=32968704u,write_errors=0u,write_ops=2414u,checksum_errors=0u,fragmentation=0u 1631479854554473239
zpool_stats,name=testtank0,state=ONLINE,vdev=root/raidz-0 alloc=83201563766784u,free=12816725102592u,size=96018288869376u,read_bytes=16560128u,read_errors=0u,read_ops=2527u,write_bytes=17223680u,write_errors=0u,write_ops=1296u,checksum_errors=0u,fragmentation=0u 1631479854554473239
zpool_stats,name=testtank0,state=ONLINE,path=/dev/disk/by-id/wwn-0x5000c50087cfcfcc-part1,vdev=root/raidz-0/disk-0 alloc=0u,free=0u,size=0u,read_bytes=757760u,read_errors=0u,read_ops=69u,write_bytes=1495040u,write_errors=0u,write_ops=115u,checksum_errors=0u,fragmentation=0u 1631479854554473239
zpool_stats,name=testtank0,state=ONLINE,path=/dev/disk/by-id/wwn-0x5000cca252cef0ed-part1,vdev=root/raidz-0/disk-1 alloc=0u,free=0u,size=0u,read_bytes=684032u,read_errors=0u,read_ops=41u,write_bytes=1433600u,write_errors=0u,write_ops=104u,checksum_errors=0u,fragmentation=0u 1631479854554473239
zpool_stats,name=testtank0,state=ONLINE,path=/dev/disk/by-id/wwn-0x5000cca252cef787-part1,vdev=root/raidz-0/disk-2 alloc=0u,free=0u,size=0u,read_bytes=2568192u,read_errors=0u,read_ops=501u,write_bytes=1441792u,write_errors=0u,write_ops=106u,checksum_errors=0u,fragmentation=0u 1631479854554473239
... snip snip ...

Note how the various vdev tags refer to root/raidz-0 (the "2" at the end of "raidz" is missing).
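
For reference, a minimal sketch (not the actual zpool_influxdb code) of how the parity level could be folded into the type string when the tag is built. ZPOOL_CONFIG_TYPE, ZPOOL_CONFIG_NPARITY, and VDEV_TYPE_RAIDZ are real constants from sys/fs/zfs.h; the helper itself and its buffer handling are illustrative:

/*
 * Illustrative helper: render a vdev's type with its parity level
 * appended, so a RAIDZ2 vdev yields "raidz2" rather than "raidz".
 * The nvlist keys are real; the function itself is a sketch.
 */
#include <stdio.h>
#include <string.h>
#include <libnvpair.h>
#include <sys/fs/zfs.h>

static void
vdev_type_with_parity(nvlist_t *nv, char *buf, size_t buflen)
{
    char *type = NULL;
    uint64_t nparity = 0;

    if (nvlist_lookup_string(nv, ZPOOL_CONFIG_TYPE, &type) != 0) {
        (void) snprintf(buf, buflen, "unknown");
        return;
    }

    /* raidz vdevs carry their parity level in a separate nvpair */
    if (strcmp(type, VDEV_TYPE_RAIDZ) == 0 &&
        nvlist_lookup_uint64(nv, ZPOOL_CONFIG_NPARITY, &nparity) == 0) {
        (void) snprintf(buf, buflen, "%s%llu", type,
            (unsigned long long)nparity);
    } else {
        (void) snprintf(buf, buflen, "%s", type);
    }
}

With something like that in place, the tags above would read root/raidz2-0 and root/raidz2-1, matching zpool status.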

You might want to double-check whatever logic is responsible for constructing that tag set. Solid tool otherwise. I'm currently using zpool_influxdb to replace a fairly complex (and fragile) custom Bash script, maintained since 2008, that screen-scrapes zpool status, and I couldn't be more excited. Finally, no more tweaking my scripts each time some cool new feature gets added to ZFS and the output of zpool status changes slightly!

toddhpoole • Sep 12 '21 23:09

The zpool command does decorate the parity into the vdev name, as you've noticed. We could do this as well. However, the important part is the instance number, the -# suffix.
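
A hedged sketch of one way to do that: zpool_vdev_name() is the libzfs function that zpool status itself uses to render vdev names (it appends the parity for raidz), and the VDEV_NAME_TYPE_ID flag adds the -# instance suffix. The wrapper and its parameter names below are illustrative:

#include <stdlib.h>
#include <libzfs.h>

/*
 * Returns a decorated name such as "raidz2-0"; the caller frees it.
 * hdl/zhp/nv are assumed to be an open libzfs handle, pool handle,
 * and vdev config nvlist respectively.
 */
static char *
decorated_vdev_name(libzfs_handle_t *hdl, zpool_handle_t *zhp, nvlist_t *nv)
{
    return (zpool_vdev_name(hdl, zhp, nv, VDEV_NAME_TYPE_ID));
}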

richardelling • Sep 13 '21 17:09