
zpool: Add zpool status -vv error ranges

Open · tonyhutter opened this pull request 2 months ago · 2 comments

Motivation and Context

Print error byte ranges with zpool status -vv

Description

Print the error byte ranges with 'zpool status -vv'. This works with all the normal 'zpool status' formatting flags: -p, -j, and --json-int.
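
For example (invocations per the flags listed above; full sample output is in the comments below):

$ zpool status -vv                # ranges in human-readable units
$ zpool status -vvp               # -p: exact (parsable) byte values
$ zpool status -vvj --json-int    # JSON, byte offsets as integers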

In addition:

  • Move range_tree/btree to common userspace/kernel code.
  • Modify the ZFS_IOC_OBJ_TO_STATS ioctl to optionally return "extended" object stats.
  • Let zinject corrupt zvol data (see the sketch below).
  • Add a test case.

This commit takes code from these PRs: #17502, #9781, and #8902.
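
As a rough illustration of the zinject change mentioned above, corrupting a zvol might look something like the sketch below. The target path and syntax are assumptions, since zvol support is what this PR adds; the exact interface may differ:

$ # Hypothetical: inject checksum errors into a zvol's data blocks
$ zinject -t data -e checksum -f 100 /dev/zvol/testpool/testvol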

How Has This Been Tested?

Test case added

Types of changes

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Performance enhancement (non-breaking change which improves efficiency)
  • [ ] Code cleanup (non-breaking change which makes code smaller or more readable)
  • [ ] Quality assurance (non-breaking change which makes the code more robust against bugs)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • [ ] Documentation (a change to man pages or other documentation)

Checklist:

  • [ ] My code follows the OpenZFS code style requirements.
  • [ ] I have updated the documentation accordingly.
  • [ ] I have read the contributing document.
  • [ ] I have added tests to cover my changes.
  • [ ] I have run the ZFS Test Suite with this change applied.
  • [ ] All commit messages are properly formatted and contain Signed-off-by.

tonyhutter commented Oct 22 '25 18:10

Sample output:

$ zpool status -vv

  pool: testpool                                                     
 state: ONLINE                                                       
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.                       
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.                                         
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A       
  scan: scrub repaired 0B in 00:00:00 with 3 errors on Tue Oct 21 17:22:20 2025
config:                                                              
                                                                     
    NAME        STATE     READ WRITE CKSUM                           
    testpool    ONLINE       0     0     0                           
      loop0     ONLINE       3     0    18                           
                                                                     
errors: Permanent errors have been detected in the following files:  
                                                                     
        <metadata>:<0x1> (no ranges)                                 
        /testpool/4k/4k_file1 0-4.00K                                
        /testpool/4k/4k_file2 0-4.00K,16K-20.0K,24K-28.0K            
        /testpool/1m/1m_file 1M-2.00M                                
        testpool/testvol:<0x1> 3.91M-3.91M,4.30M-4.30M    
$ zpool status -vvjp --json-int | jq
...
      "errors": {
        "<metadata>:<0x1>": {
          "name": "<metadata>:<0x1>",
          "object": 1,
          "dataset": 0
        },
        "/testpool/4k/4k_file1": {
          "object_type": "ZFS plain file",
          "ranges": [
            {
              "start_byte": 0,
              "end_byte": 4095
            }
          ],
          "name": "/testpool/4k/4k_file1",
          "object": 2,
          "dataset": 262,
          "block_size": 4096
        },
        "/testpool/4k/4k_file2": {
          "object_type": "ZFS plain file",
          "ranges": [
            {
              "start_byte": 0,
              "end_byte": 4095
            },
            {
              "start_byte": 16384,
              "end_byte": 20479
            },
            {
              "start_byte": 24576,
              "end_byte": 28671
            }
          ],
          "name": "/testpool/4k/4k_file2",
          "object": 128,
          "dataset": 262,
          "block_size": 4096
        },
        "/testpool/1m/1m_file": {
          "object_type": "ZFS plain file",
          "ranges": [
            {
              "start_byte": 1048576,
              "end_byte": 2097151
            }
          ],
          "name": "/testpool/1m/1m_file",
          "object": 2,
          "dataset": 270,
          "block_size": 1048576
        },
        "testpool/testvol:<0x1>": {
          "object_type": "zvol",
          "ranges": [
            {
              "start_byte": 4096000,
              "end_byte": 4100095
            },
            {
              "start_byte": 4505600,
              "end_byte": 4509695
            }
          ],
          "name": "testpool/testvol:<0x1>",
          "object": 1,
          "dataset": 278,
          "block_size": 4096
        }
      }
    }
  }
}
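
For scripting, the ranges can be pulled straight from the JSON. A minimal jq sketch, assuming the errors object sits under .pools.<pool> as the truncated output above suggests (entries without ranges, like <metadata>, fall back to an empty list):

$ zpool status -vvj --json-int | jq -r '.pools[].errors // {} | to_entries[] |
    "\(.key) \(.value.ranges // [] | map("\(.start_byte)-\(.end_byte)") | join(","))"'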

tonyhutter commented Oct 22 '25 18:10

Just the filenames (JSON):

$ zpool status -vj
...
      "errors": {
        "<metadata>:<0x1>": {
          "name": "<metadata>:<0x1>"
        },
        "/testpool/4k/4k_file1": {
          "name": "/testpool/4k/4k_file1"
        },
        "/testpool/4k/4k_file2": {
          "name": "/testpool/4k/4k_file2"
        },
        "/testpool/1m/1m_file": {
          "name": "/testpool/1m/1m_file"
        },
        "testpool/testvol:<0x1>": {
          "name": "testpool/testvol:<0x1>"
        }
      }
    }
  }
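
And a matching one-liner to list just the affected names (same assumed .pools layout):

$ zpool status -vj | jq -r '.pools[].errors // {} | keys[]'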

tonyhutter commented Oct 22 '25 18:10