Jim Salter

Results: 51 comments by Jim Salter

I appreciate the fix, but I'm not happy with piping through grep. This introduces new and exciting ways for things to fail on different distros when grep may or may...
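To illustrate the kind of distro-to-distro difference I mean (a hypothetical pipeline, not the actual code from the fix; `some_command` is a placeholder): GNU grep accepts extensions that busybox and BSD-derived greps generally don't, so the same line can work on one distro and die on another.

````
# Hypothetical example -- not the code from the fix.
# -P (Perl regex) is a GNU grep extension, available only when grep is GNU grep built with PCRE:
some_command | grep -P '^\d+'
# busybox and BSD-derived greps generally don't implement -P at all;
# plain POSIX ERE behaves the same everywhere:
some_command | grep -E '^[0-9]+'
````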

The contents of `events` had already rolled over, so I had to repeat the steps and corrupt another pool. Here's the output you requested, immediately following corrupting the new pool...
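(For anyone retracing this: `events` here is presumably the kernel's ZFS event buffer, which is a bounded ring and rolls over; a quick sketch of dumping and clearing it between attempts, commands only:)

````
zpool events -v    # dump the full event buffer, verbose
zpool events -c    # clear it before the next corruption attempt
````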

Update: actually creating the million files is unnecessary. Failure occurs with just creating the pool, adding the specials, and exporting the pool.

````
root@jrs-dr0:/home/jrs# ls /dev/disk/by-id | grep ST120 |...
````
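In outline, the sequence that triggers it is just this (a minimal sketch with placeholder device names -- substitute the by-id names from the listing; the real pool uses the Ironwolf data disks and Optane specials):

````
# Placeholder device names, not the exact commands from the report.
zpool create -o ashift=12 test /dev/disk/by-id/DATA_DISK
zpool add test special /dev/disk/by-id/OPTANE_DISK
zpool export test
zpool import    # the import listing no longer reports the special vdev correctly (see below)
````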

The failure also reproduces with a pool containing only one single-disk data vdev (Ironwolf) and one single-disk special vdev (Optane).

````
root@jrs-dr0:/home/jrs# zpool status test
  pool: test
 state: ONLINE
  scan: none requested
config:
...
````

Wait, are you seeing what I'm seeing? The `zpool status` prior to export shows a data vdev and a `special` vdev. The `zpool import` AFTER the export shows two single-disk...

If you look back across my earlier reports in this issue, you'll see the same thing on each of them: `zpool status` shows the `special` vdev as a `special` prior...

By contrast, if I create a test pool out of sparse files, both `zpool status` while the pool is live **and** `zpool import` after it's exported show the vdev as...
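For reference, the sparse-file control case is roughly this (a minimal sketch; pool name, paths, and sizes are illustrative, not the exact ones I used):

````
# Illustrative paths and sizes only.
truncate -s 10G /tmp/data.img /tmp/special.img
zpool create -o ashift=12 filetest /tmp/data.img special /tmp/special.img
zpool status filetest        # special vdev listed under its own "special" class
zpool export filetest
zpool import -d /tmp         # the import scan still shows the vdev as special here
````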

Seems unlikely, given that it has no trouble at all with the same disks in the **absence** of a `special`, and that the problem also occurs if the `special` is...

Pools export and import fine from disks attached to the LSI, as long as they don't include a `special`:

````
root@jrs-dr0:/tmp# zpool create -oashift=12 test scsi-SATA_ST12000VN0007-2G_ZCH0BSFK
root@jrs-dr0:/tmp# zpool status test
...
````
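(The remainder of that sequence is the usual round trip; sketching just the commands here, since the full output is cut off above:)

````
zpool export test
zpool import test
zpool status test    # the special-less pool comes back intact
````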

So, to sum up:

* one data vdev attached to LSI, one special attached to LSI: **corrupt on export/import**
* one data vdev attached to LSI, one special attached to...