chrisrd

12 comments by chrisrd

The "more efficient ways" are probably as done by `zpool iostat`. That uses an zfs ioctl to pull in at least the `nread`, `nwritten`, `reads` and `writes` part of the...

Looks like it's not straightforward (you've gotta pull apart "nvlists" and ...stuff), but someone's started on it: https://github.com/lorenz/go-zfs

There's another "go-zfs" project at https://github.com/mistifyio/go-zfs, but that one appears to be a wrapper around the command-line utilities, so it's not suitable given "We don't allow running external commands".

Huh. And another at https://github.com/bicomsystems/go-libzfs, which links against the libzfs C library, so it's also less suitable.

@discostur Nothing new from me - I'm not in a position to fix this. In the meantime, I'm just monitoring one of the disks in my pool with: ``` irate(node_disk_read_bytes_total{device="sdf"}[5m])...

Just piping up to mention Ceph object storage as another S3-compatible object store: https://ceph.io/en/discover/technology/#object. We've been considering putting ZFS on a Ceph block store to complete our storage consolidation...

Have you tried increasing `zfs_arc_dnode_limit_percent` (or `zfs_arc_dnode_limit`) to avoid flushing the dnodes too aggressively? (See also `man zfs-module-parameters`)
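If it helps, here's a minimal Go sketch of checking and raising the parameter at runtime through `/sys/module/zfs/parameters` (needs root; the `25` below is just an illustrative value, the default being 10):

```go
// Minimal sketch: check and raise zfs_arc_dnode_limit_percent at runtime via
// /sys/module/zfs/parameters (needs root). "25" is only an illustrative value
// (the default is 10); for a persistent setting you'd put
// "options zfs zfs_arc_dnode_limit_percent=25" in /etc/modprobe.d/zfs.conf.
package main

import (
	"fmt"
	"os"
	"strings"
)

const param = "/sys/module/zfs/parameters/zfs_arc_dnode_limit_percent"

func main() {
	cur, err := os.ReadFile(param)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("current zfs_arc_dnode_limit_percent: %s\n", strings.TrimSpace(string(cur)))

	// Raise the limit so dnodes aren't evicted from the ARC as aggressively.
	if err := os.WriteFile(param, []byte("25"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("zfs_arc_dnode_limit_percent set to 25")
}
```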

Possibly related:
- #10331 - fix dnode eviction typo in arc_evict_state()
- #10563 - dbuf cache size is 1/32nd what was intended
- #10600 - Revise ARC shrinker algorithm
- #10610 - Limit dbuf...

I don't agree that #2451 has resolved this issue: it fixes the case where the missing zpool IO metrics would cause the rest of the (otherwise available) zpool metrics to...

Thanks @siebenmann, I'll certainly take a look at [siebenmann/zfs_exporter](https://github.com/siebenmann/zfs_exporter). For my use-case, I don't specifically need "the same as before" IO stats, just _some_ stats that allow monitoring the pool...