
9.2.0-rc.6 Kernel Panic

Smithx10 opened this issue on Jul 22, 2022 · 6 comments

While testing performance of nvme-tcp on top of DRBD, I hit a kernel panic:

The client workload was the following fio command:

fio --time_based --name=benchmark --size=4G --runtime=300 --filename=./randwrite --ioengine=libaio --randrepeat=0 --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=8 --rw=randwrite --blocksize=4k --group_reporting

Client Side:

[root@drbd-linstor-2 nvm-test]# uname -a
Linux drbd-linstor-2 5.18.12-1.el8.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jul 15 07:10:46 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

[root@drbd-linstor-2 nvm-test]# modinfo nvme_tcp
filename:       /lib/modules/5.18.12-1.el8.elrepo.x86_64/kernel/drivers/nvme/host/nvme-tcp.ko.xz
license:        GPL v2
srcversion:     E2363EE73FF90031E8C71B3
depends:        nvme-core,nvme-fabrics
retpoline:      Y
intree:         Y
name:           nvme_tcp
vermagic:       5.18.12-1.el8.elrepo.x86_64 SMP preempt mod_unload modversions
parm:           so_priority:nvme tcp socket optimize priority (int)
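
For context, the report does not include the client's connect command; assuming a standard nvme-cli setup against the target shown in the server section below, it would have looked something like this (a hedged reconstruction — the NQN, address, and port are taken from the nvmetcli output):

# Hypothetical reconstruction; the actual connect command was not captured.
modprobe nvme-tcp
nvme connect -t tcp -a 10.91.230.214 -s 4420 -n linbit:nvme:demo0
# fio then ran against a file on the resulting /dev/nvmeXnY namespace.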

Server Side:

[root@ac-1f-6b-a4-df-ee ~]# uname -a
Linux ac-1f-6b-a4-df-ee 5.17.9-1.el8.elrepo.x86_64 #1 SMP PREEMPT Tue May 17 16:22:04 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux


nvmetcli ls
o- / ........................................................................................................... [...]
  o- hosts ..................................................................................................... [...]
  o- ports ..................................................................................................... [...]
  | o- 0 .................................... [trtype=tcp, traddr=10.91.230.214, trsvcid=4420, inline_data_size=16384]
  |   o- ana_groups ............................................................................................ [...]
  |   | o- 1 ....................................................................................... [state=optimized]
  |   o- referrals ............................................................................................. [...]
  |   o- subsystems ............................................................................................ [...]
  |     o- linbit:nvme:demo0 ................................................................................... [...]
  o- subsystems ................................................................................................ [...]
    o- linbit:nvme:demo0 ......................................... [version=1.3, allow_any=1, serial=c7499507d3254059]
      o- allowed_hosts ......................................................................................... [...]
      o- namespaces ............................................................................................ [...]
        o- 1  [path=/dev/drbd/by-res/demo0/1, uuid=2c73d2c7-e5b1-5ffd-94f2-dc51e384f49e, nguid=2c73d2c7-e5b1-5ffd-94f2-dc51e384f49e, grpid=1, enabled]
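
For reference, a target layout like the one above can also be built by hand through the nvmet configfs interface. A minimal sketch, assuming the nvmet and nvmet-tcp modules are loaded and reusing the NQN, namespace path, and address shown above:

# Sketch only; the issue does not say how this target was actually
# created (it may have been set up by LINSTOR or nvmetcli).
cd /sys/kernel/config/nvmet
mkdir subsystems/linbit:nvme:demo0
echo 1 > subsystems/linbit:nvme:demo0/attr_allow_any_host
mkdir subsystems/linbit:nvme:demo0/namespaces/1
echo /dev/drbd/by-res/demo0/1 > subsystems/linbit:nvme:demo0/namespaces/1/device_path
echo 1 > subsystems/linbit:nvme:demo0/namespaces/1/enable
mkdir ports/0
echo tcp > ports/0/addr_trtype
echo ipv4 > ports/0/addr_adrfam
echo 10.91.230.214 > ports/0/addr_traddr
echo 4420 > ports/0/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/linbit:nvme:demo0 ports/0/subsystems/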


[root@ac-1f-6b-a4-df-ee ~]# modinfo drbd
filename:       /lib/modules/5.17.9-1.el8.elrepo.x86_64/extra/drbd/drbd.ko
alias:          block-major-147-*
license:        GPL
version:        9.2.0-rc.6
description:    drbd - Distributed Replicated Block Device v9.2.0-rc.6
author:         Philipp Reisner <[email protected]>, Lars Ellenberg <[email protected]>
srcversion:     860BC782A49F832D4EBBB95
depends:        libcrc32c
retpoline:      Y
name:           drbd
vermagic:       5.17.9-1.el8.elrepo.x86_64 SMP preempt mod_unload modversions
parm:           enable_faults:int
parm:           fault_rate:int
parm:           fault_count:int
parm:           fault_devs:int
parm:           disable_sendpage:bool
parm:           allow_oos:DONT USE! (bool)
parm:           minor_count:Approximate number of drbd devices (1-255) (uint)
parm:           usermode_helper:string
parm:           protocol_version_min:drbd_protocol_version



[root@ac-1f-6b-a4-df-ee ~]# modinfo zfs
filename:       /lib/modules/5.17.9-1.el8.elrepo.x86_64/extra/zfs.ko.xz
version:        2.1.4-1
license:        CDDL
author:         OpenZFS
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     1F136E6AD43697243DB8CE6
depends:        spl,znvpair,icp,zlua,zzstd,zunicode,zcommon,zavl
retpoline:      Y
name:           zfs
vermagic:       5.17.9-1.el8.elrepo.x86_64 SMP preempt mod_unload modversions
sig_id:         PKCS#7
signer:         DKMS module signing key
sig_key:        02:E0:9D:99:71:16:71:B9:8A:82:95:56:5E:D3:82:31:61:36:CC:89
sig_hashalgo:   sha512
signature:      9A:58:52:2F:CD:27:DB:CF:4E:36:94:5C:F9:17:18:BA:57:1C:22:35:
                F6:03:9B:A8:7B:C7:CD:A0:AC:AA:C1:FE:10:A5:85:0E:3F:6B:AA:44:
                8D:1C:75:F0:05:48:41:A9:13:2A:04:2C:8A:90:CE:FF:09:E5:4C:90:
                75:FD:F7:41:A7:BA:B1:BE:4A:67:9A:C1:C6:DE:53:05:58:54:25:3E:
                05:C0:BD:80:06:A8:FF:36:CD:15:3F:D7:BE:0D:70:AE:5F:E5:E1:2C:
                61:E3:D8:1A:7C:8C:5D:92:82:40:7A:C2:4A:8B:1C:E1:E0:48:DC:4C:
                92:04:BE:30:92:09:2A:F9:79:58:4D:C3:24:90:9B:51:B5:72:7D:20:
                96:01:85:A9:A5:AD:B3:0D:32:76:E7:67:63:FB:F9:48:4D:03:C9:3E:
                17:8B:C0:9E:F6:7C:D8:7C:1E:FD:E0:6F:E9:DC:E4:1E:9C:13:1D:63:
                65:2A:72:F1:FD:12:4A:2A:63:2E:81:03:64:29:38:D1:BA:5D:C6:7F:
                CA:11:70:D2:CC:3F:7A:66:DE:52:AD:F9:8D:ED:D5:22:76:A3:75:75:
                E2:FB:66:37:62:C3:7B:E3:0E:08:47:F6:FC:2A:AB:BC:6D:F7:73:73:
                F4:66:DF:4B:B7:39:B1:4C:DE:19:32:25:F3:D9:47:5F
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Max number of threads to handle I/O requests (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zfs_fallocate_reserve_percent:Percentage of length to use for the available capacity check (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           vdev_file_logical_ashift:Logical ashift for file-based devices (ulong)
parm:           vdev_file_physical_ashift:Physical ashift for file-based devices (ulong)
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_arc_shrinker_limit:Limit on number of pages that ARC shrinker can reclaim at once (int)
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow (int)
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs (int)
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage (int)
parm:           zil_replay_disable:Disable intent logging replay (int)
parm:           zil_nocacheflush:Disable ZIL cache flushes (int)
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size (int)
parm:           zfs_vnops_read_chunk_size:Bytes to read per chunk (ulong)
parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
parm:           zfs_max_nvlist_src_size:Maximum size in bytes allowed for src nvlist passed with ZFS ioctls (ulong)
parm:           zfs_history_output_max:Maximum size in bytes of ZFS ioctl output that will be logged (ulong)
parm:           zfs_zevent_retain_max:Maximum recent zevents records to retain for duplicate checking (uint)
parm:           zfs_zevent_retain_expire_secs:Expiration time for recent zevents records (uint)
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program (ulong)
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program (ulong)
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it (int)
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split (uint)
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped (uint)
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized (uint)
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM (uint)
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev (uint)
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device (int)
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device (int)
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span (int)
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang) (int)
parm:           zfs_rebuild_max_segment:Max segment size in bytes of rebuild reads (ulong)
parm:           zfs_rebuild_vdev_limit:Max bytes in flight per leaf vdev for sequential resilvers (ulong)
parm:           zfs_rebuild_scrub_enabled:Automatically scrub after sequential resilver completes (int)
parm:           zfs_vdev_raidz_impl:Select raidz implementation.
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media (int)
parm:           zfs_vdev_aggregate_trim:Allow TRIM I/O to be aggregated (int)
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev (int)
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev (int)
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev (int)
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev (int)
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_rebuild_max_active:Max active rebuild I/Os per vdev (int)
parm:           zfs_vdev_rebuild_min_active:Min active rebuild I/Os per vdev (int)
parm:           zfs_vdev_nia_credit:Number of non-interactive I/Os to allow in sequence (int)
parm:           zfs_vdev_nia_delay:Number of non-interactive I/Os before _max_active (int)
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O's (int)
parm:           zfs_initialize_value:Value written during zpool initialize (ulong)
parm:           zfs_initialize_chunk_size:Size in bytes of writes by zpool initialize (ulong)
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings (int)
parm:           zfs_condense_indirect_obsolete_pct:Minimum obsolete percent of bytes in the mapping to attempt condensing (int)
parm:           zfs_condense_min_mapping_bytes:Don't bother condensing if the mapping uses less than this amount of memory (ulong)
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing (ulong)
parm:           zfs_condense_indirect_commit_entry_delay_ms:Used by tests to ensure certain actions happen in the middle of a condense. A maximum value of 1 should be sufficient. (int)
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments (int)
parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_default_ms_shift:Default limit for metaslab size (int)
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev (int)
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second (uint)
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below zed threshold). (uint)
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub (int)
parm:           vdev_validate_skip:Bypass vdev_validate() (int)
parm:           zfs_nocacheflush:Disable cache flushes (int)
parm:           zfs_embedded_slog_min_ms:Minimum number of metaslabs required to dedicate one for log blocks (int)
parm:           zfs_vdev_min_auto_ashift:Minimum ashift used when creating new top-level vdevs
parm:           zfs_vdev_max_auto_ashift:Maximum ashift used when optimizing for logical -> physical sector size on new top-level vdevs
parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm:           zfs_read_history:Historical statistics for the last N reads (int)
parm:           zfs_read_history_hits:Include cache hits in read history (int)
parm:           zfs_txg_history:Historical statistics for the last N txgs (int)
parm:           zfs_multihost_history:Historical statistics for last N multihost writes (int)
parm:           zfs_flags:Set additional debugging flags (uint)
parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
parm:           zfs_deadman_enabled:Enable deadman timer (int)
parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
parm:           zfs_ddt_data_is_special:Place DDT data into the special class (int)
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class (int)
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available (int)
parm:           spa_slop_shift:Reserved free space in pool
parm:           zfs_unflushed_max_mem_amt:Specific hard-limit in memory that ZFS allows to be used for unflushed changes (ulong)
parm:           zfs_unflushed_max_mem_ppm:Percentage of the overall system memory that ZFS allows to be used for unflushed changes (value is calculated over 1000000 for finer granularity) (ulong)
parm:           zfs_unflushed_log_block_max:Hard limit (upper-bound) in the size of the space map log in terms of blocks. (ulong)
parm:           zfs_unflushed_log_block_min:Lower-bound limit for the maximum amount of blocks allowed in log spacemap (see zfs_unflushed_log_block_max) (ulong)
parm:           zfs_unflushed_log_block_pct:Tunable used to determine the number of blocks that can be used for the spacemap log, expressed as a percentage of the total number of metaslabs in the pool (e.g. 400 means the number of log blocks is capped at 4 times the number of metaslabs) (ulong)
parm:           zfs_max_log_walking:The number of past TXGs that the flushing algorithm of the log spacemap feature uses to estimate incoming log blocks (ulong)
parm:           zfs_max_logsm_summary_length:Maximum number of rows allowed in the summary of the spacemap log (ulong)
parm:           zfs_min_metaslabs_to_flush:Minimum number of metaslabs to flush per dirty TXG (ulong)
parm:           zfs_keep_log_spacemaps_at_export:Prevent the log spacemaps from being flushed and destroyed during pool export/destroy (int)
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm:           zfs_autoimport_disable:Disable pool import at module load (int)
parm:           zfs_spa_discard_memory_limit:Limit for memory used in prefetching the checkpoint space map done on each vdev while discarding the checkpoint (ulong)
parm:           spa_load_verify_shift:log2 fraction of arc that can be used by inflight I/Os when verifying pool during import (int)
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm:           spa_load_verify_data:Set to traverse data on pool import (int)
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import (int)
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm:           zio_taskq_batch_tpq:Number of threads per IO worker taskqueue (uint)
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) (ulong)
parm:           zfs_livelist_condense_zthr_pause:Set the livelist condense zthr to pause (int)
parm:           zfs_livelist_condense_sync_pause:Set the livelist condense synctask to pause (int)
parm:           zfs_livelist_condense_sync_cancel:Whether livelist condensing was canceled in the synctask (int)
parm:           zfs_livelist_condense_zthr_cancel:Whether livelist condensing was canceled in the zthr function (int)
parm:           zfs_livelist_condense_new_alloc:Whether extra ALLOC blkptrs were added to a livelist entry while it was being condensed (int)
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
parm:           metaslab_aliquot:Allocation granularity (a.k.a. stripe size) (ulong)
parm:           metaslab_debug_load:Load all metaslabs when pool is first opened (int)
parm:           metaslab_debug_unload:Prevent metaslabs from being unloaded (int)
parm:           metaslab_preload_enabled:Preload potential metaslabs during reassessment (int)
parm:           metaslab_unload_delay:Delay in txgs after metaslab was last used before unloading (int)
parm:           metaslab_unload_delay_ms:Delay in milliseconds after metaslab was last used before unloading (int)
parm:           zfs_mg_noalloc_threshold:Percentage of metaslab group size that should be free to make it eligible for allocation (int)
parm:           zfs_mg_fragmentation_threshold:Percentage of metaslab group size that should be considered eligible for allocations unless all metaslab groups within the metaslab class have also crossed this threshold (int)
parm:           zfs_metaslab_fragmentation_threshold:Fragmentation for metaslab to allow allocation (int)
parm:           metaslab_fragmentation_factor_enabled:Use the fragmentation metric to prefer less fragmented metaslabs (int)
parm:           metaslab_lba_weighting_enabled:Prefer metaslabs with lower LBAs (int)
parm:           metaslab_bias_enabled:Enable metaslab group biasing (int)
parm:           zfs_metaslab_segment_weight_enabled:Enable segment-based metaslab selection (int)
parm:           zfs_metaslab_switch_threshold:Segment-based metaslab selection maximum buckets before switching (int)
parm:           metaslab_force_ganging:Blocks larger than this size are forced to be gang blocks (ulong)
parm:           metaslab_df_max_search:Max distance (bytes) to search forward before using size tree (int)
parm:           metaslab_df_use_largest_segment:When looking in size tree, use largest segment instead of exact fit (int)
parm:           zfs_metaslab_max_size_cache_sec:How long to trust the cached max chunk size of a metaslab (ulong)
parm:           zfs_metaslab_mem_limit:Percentage of memory that can be used to store metaslab range trees (int)
parm:           zfs_metaslab_try_hard_before_gang:Try hard to allocate before ganging (int)
parm:           zfs_metaslab_find_max_tries:Normally only consider this many of the best metaslabs in each vdev (int)
parm:           zfs_zevent_len_max:Max event queue length (int)
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers (ulong)
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg (int)
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg (int)
parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing (int)
parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg (ulong)
parm:           zfs_max_async_dedup_frees:Max number of dedup blocks freed in one txg (ulong)
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit (int)
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size (int)
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method (int)
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval (int)
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os (ulong)
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit (int)
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention (int)
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans (int)
parm:           zfs_resilver_disable_defer:Process all resilvers immediately (int)
parm:           zfs_dirty_data_max_percent:Max percent of RAM allowed to be dirty (int)
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm:           zfs_delay_min_dirty_percent:Transaction delay threshold (int)
parm:           zfs_dirty_data_max:Determines the dirty space limit (ulong)
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm:           zfs_dirty_data_sync_percent:Dirty data txg sync threshold as a percentage of zfs_dirty_data_max (int)
parm:           zfs_delay_scale:How quickly delay approaches infinity (ulong)
parm:           zfs_sync_taskq_batch_pct:Max percent of CPUs that are used to sync dirty data (int)
parm:           zfs_zil_clean_taskq_nthr_pct:Max percent of CPUs that are used per dp_sync_taskq (int)
parm:           zfs_zil_clean_taskq_minalloc:Number of taskq entries that are pre-populated (int)
parm:           zfs_zil_clean_taskq_maxalloc:Max number of taskq entries that are cached (int)
parm:           zfs_livelist_max_entries:Size to start the next sub-livelist in a livelist (ulong)
parm:           zfs_livelist_min_percent_shared:Threshold at which livelist is disabled (int)
parm:           zfs_max_recordsize:Max allowed record size (int)
parm:           zfs_allow_redacted_dataset_mount:Allow mounting of redacted datasets (int)
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids (int)
parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm:           zfetch_max_distance:Max bytes to prefetch per stream (uint)
parm:           zfetch_max_idistance:Max bytes to prefetch indirects for per stream (uint)
parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm:           zfs_traverse_indirect_prefetch_limit:Traverse prefetch number of blocks pointed by indirect block (int)
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
parm:           zfs_send_queue_length:Maximum send queue length (int)
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks (int)
parm:           zfs_send_no_prefetch_queue_length:Maximum send queue length for non-prefetch queues (int)
parm:           zfs_send_queue_ff:Send queue fill fraction (int)
parm:           zfs_send_no_prefetch_queue_ff:Send queue fill fraction for non-prefetch queues (int)
parm:           zfs_override_estimate_recordsize:Override block size estimate with fixed size (int)
parm:           zfs_recv_queue_length:Maximum receive queue length (int)
parm:           zfs_recv_queue_ff:Receive queue fill fraction (int)
parm:           zfs_recv_write_batch_size:Maximum amount of writes to batch into one transaction (int)
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
parm:           zfs_per_txg_dirty_frees_percent:Percentage of dirtied blocks from frees in one TXG (ulong)
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
parm:           dmu_prefetch_max:Limit one prefetch call to this size (int)
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm:           zfs_dbuf_state_index:Calculate arc header index (int)
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of the dbuf metadata cache. (ulong)
parm:           dbuf_cache_shift:Set the size of the dbuf cache to a log2 fraction of arc size. (int)
parm:           dbuf_metadata_cache_shift:Set the size of the dbuf metadata cache to a log2 fraction of arc size. (int)
parm:           zfs_arc_min:Min arc size
parm:           zfs_arc_max:Max arc size
parm:           zfs_arc_meta_limit:Metadata limit for arc size
parm:           zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit
parm:           zfs_arc_meta_min:Min arc metadata
parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_evict_meta (int)
parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm:           zfs_arc_grow_retry:Seconds before growing arc size
parm:           zfs_arc_p_dampener_disable:Disable arc_p adapt dampener (int)
parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p
parm:           zfs_arc_average_blocksize:Target average block size (int)
parm:           zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms
parm:           l2arc_write_max:Max write bytes per interval (ulong)
parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm:           l2arc_headroom:Number of max device writes to precache (ulong)
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm:           l2arc_trim_ahead:TRIM ahead L2ARC write size multiplier (ulong)
parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
parm:           l2arc_norw:No reads during writes (int)
parm:           l2arc_meta_percent:Percent of ARC size allowed for L2ARC-only headers (int)
parm:           l2arc_rebuild_enabled:Rebuild the L2ARC when importing a pool (int)
parm:           l2arc_rebuild_blocks_min_l2size:Min size in bytes to write rebuild log blocks in L2ARC (ulong)
parm:           l2arc_mfuonly:Cache only MFU data from ARC into L2ARC (int)
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in arc
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
parm:           zfs_arc_eviction_pct:When full, ARC allocation waits for eviction of this % of alloc size (int)
parm:           zfs_arc_evict_batch_limit:The number of headers to evict per sublist before moving to the next (int)
parm:           zfs_arc_prune_task_threads:Number of arc_prune threads (int)

Panic Output:

[  637.965767] drbd demo0 ac-1f-6b-a4-df-ee: Preparing remote state change 665298238
[  638.041583] drbd demo0 ac-1f-6b-a4-df-ee: Committing remote state change 665298238 (primary_nodes=1)
[  639.376799] drbd demo0/0 drbd1000: new current UUID: FB35C925F1C156CD weak: FFFFFFFFFFFFFFFE
[  639.672337] nvmet: creating nvm controller 1 for subsystem linbit:nvme:demo0 for NQN nqn.2014-08.org.nvmexpress:uuid:a0ed6504-1546-40bd-a593-8bf9aa93fc58.
[  643.563056] drbd demo0 ac-1f-6b-a5-ab-ea: Handshake to peer 1 successful: Agreed network protocol version 121
[  643.573695] drbd demo0 ac-1f-6b-a5-ab-ea: Feature flags enabled on protocol level: 0x1f TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[  643.599194] drbd demo0 ac-1f-6b-a5-ab-ea: Peer authenticated using 20 bytes HMAC
[  643.648082] drbd demo0: Preparing cluster-wide state change 3344582659 (0->1 499/145)
[  643.670010] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: drbd_sync_handshake:
[  643.677270] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: self FB35C925F1C156CD:9FB8D3169D057002:0000000000000000:0000000000000000 bits:2 flags:120
[  643.690975] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: peer 9FB8D3169D057002:0000000000000000:341A0FE913FD4D82:0000000000000000 bits:0 flags:20
[  643.704611] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: uuid_compare()=source-use-bitmap by rule=bitmap-self
[  643.732998] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: drbd_sync_handshake:
[  643.740277] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: self 3B2E20FB38B2A4D8:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:120
[  643.754031] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: peer 3B2E20FB38B2A4D8:0000000000000000:A9837299945C2E84:0000000000000000 bits:0 flags:20
[  643.767722] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: uuid_compare()=no-sync by rule=lost-quorum
[  643.787591] drbd demo0: State change 3344582659: primary_nodes=1, weak_nodes=FFFFFFFFFFFFFFF8
[  643.796642] drbd demo0: Committing cluster-wide state change 3344582659 (148ms)
[  643.812867] drbd demo0 ac-1f-6b-a5-ab-ea: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
[  643.822690] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: repl( Off -> WFBitMapS )
[  643.830345] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: repl( Off -> Established )
[  643.838914] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 24(1), total 24; compression: 98.9%
[  643.867200] drbd demo0/1 drbd1001 ac-1f-6b-a5-ab-ea: pdsk( Outdated -> UpToDate )
[  643.883899] drbd demo0 ac-1f-6b-a5-ab-ea: Preparing remote state change 1952820562
[  643.945539] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 24(1), total 24; compression: 98.9%
[  643.967828] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: helper command: /sbin/drbdadm before-resync-source
[  643.986114] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: helper command: /sbin/drbdadm before-resync-source exit code 0
[  644.005485] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: pdsk( Outdated -> Inconsistent ) repl( WFBitMapS -> SyncSource )
[  644.016708] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: Began resync as SyncSource (will sync 8 KB [2 bits set]).
[  644.044848] drbd demo0 ac-1f-6b-a5-ab-ea: Committing remote state change 1952820562 (primary_nodes=1)
[  644.192065] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: updated UUIDs FB35C925F1C156CD:0000000000000000:9FB8D3169D057002:0000000000000000
[  644.214321] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: Resync done (total 1 sec; paused 0 sec; 8 K/sec)
[  644.224100] drbd demo0/0 drbd1000 ac-1f-6b-a5-ab-ea: pdsk( Inconsistent -> UpToDate ) repl( SyncSource -> Established )
[  788.660635] nvmet: creating nvm controller 1 for subsystem linbit:nvme:demo0 for NQN nqn.2014-08.org.nvmexpress:uuid:a0ed6504-1546-40bd-a593-8bf9aa93fc58.
[  788.921321] general protection fault, probably for non-canonical address 0xffedcf3970da3000: 0000 [#1] PREEMPT SMP NOPTI
[  788.932971] CPU: 24 PID: 971 Comm: kworker/24:1H Tainted: P S         OE     5.17.9-1.el8.elrepo.x86_64 #1
[  788.943410] Hardware name: Supermicro SYS-1029U-TN10RT/X11DPU, BIOS 3.2 10/16/2019
[  788.951736] Workqueue: nvmet_tcp_wq nvmet_tcp_io_work [nvmet_tcp]
[  788.958587] RIP: 0010:memcpy_erms+0x6/0x10
[  788.963437] Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
[  788.983681] RSP: 0018:ffffa6785bc03bb8 EFLAGS: 00010202
[  788.989637] RAX: ffedcf3970da3000 RBX: 000000000000021c RCX: 000000000000021c
[  788.997506] RDX: 000000000000021c RSI: ffff98a9d7ab89da RDI: ffedcf3970da3000
[  789.005365] RBP: ffff984ab05e6f80 R08: 0000000000000000 R09: ffffffff927e2740
[  789.013213] R10: 0000000000000000 R11: ffff98aa6fe9cf60 R12: 0000000000006168
[  789.021059] R13: 000000000000021c R14: 000000000000021c R15: 0000000000000000
[  789.028896] FS:  0000000000000000(0000) GS:ffff99083f500000(0000) knlGS:0000000000000000
[  789.037686] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  789.044137] CR2: 00007f13544ab000 CR3: 00000082b520a003 CR4: 00000000007706e0
[  789.051963] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  789.059775] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  789.067569] PKRU: 55555554
[  789.070932] Call Trace:
[  789.074034]  <TASK>
[  789.076778]  _copy_to_iter+0x3e6/0x6a0
[  789.081160]  ? __check_object_size+0x53/0x170
[  789.086130]  __skb_datagram_iter+0x19c/0x300
[  789.090993]  ? receiver_wake_function+0x20/0x20
[  789.096117]  skb_copy_datagram_iter+0x30/0x90
[  789.101060]  tcp_recvmsg_locked+0x1cd/0x8e0
[  789.105818]  tcp_recvmsg+0xa9/0x1e0
[  789.109874]  inet_recvmsg+0x5c/0x130
[  789.114014]  nvmet_tcp_io_work+0xcf/0xb05 [nvmet_tcp]
[  789.119630]  ? __switch_to_asm+0x42/0x70
[  789.124111]  ? finish_task_switch+0xb2/0x2c0
[  789.128922]  process_one_work+0x222/0x3f0
[  789.133466]  ? process_one_work+0x3f0/0x3f0
[  789.138181]  worker_thread+0x2d/0x3b0
[  789.142361]  ? process_one_work+0x3f0/0x3f0
[  789.147054]  kthread+0xd7/0x100
[  789.150686]  ? kthread_complete_and_exit+0x20/0x20
[  789.155970]  ret_from_fork+0x1f/0x30
[  789.160041]  </TASK>
[  789.162706] Modules linked in: tcp_diag(E) inet_diag(E) ext4(E) mbcache(E) jbd2(E) xt_multiport(E) nft_compat(E) nf_tables(E) nfnetlink(E) bcache(E) crc64(E) dm_cache(E) dm_persistent_data(E) dm_bio_prison(E) dm_bufio(E) dm_writecache(E) nvme_rdma(E) nvmet_rdma(E) rdma_cm(E) iw_cm(E) ib_cm(E) ib_core(E) dm_mod(E) drbd_transport_tcp(OE) drbd(OE) 8021q(E) garp(E) mrp(E) stp(E) llc(E) rfkill(E) sunrpc(E) intel_rapl_msr(E) intel_rapl_common(E) skx_edac(E) iTCO_wdt(E) intel_pmc_bxt(E) nfit(E) iTCO_vendor_support(E) libnvdimm(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) kvm(E) irqbypass(E) ipmi_ssif(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) rapl(E) intel_cstate(E) i2c_i801(E) mei_me(E) intel_uncore(E) pcspkr(E) acpi_ipmi(E) joydev(E) mei(E) lpc_ich(E) i2c_smbus(E) ioatdma(E) intel_pch_thermal(E) ipmi_si(E) acpi_power_meter(E) acpi_pad(E) vfat(E) fat(E) binfmt_misc(E) sr_mod(E) sd_mod(E) cdrom(E) sg(E) xfs(E) ast(E) i2c_algo_bit(E)
[  789.162757]  drm_vram_helper(E) libcrc32c(E) drm_kms_helper(E) syscopyarea(E) sysfillrect(E) sysimgblt(E) ahci(E) fb_sys_fops(E) drm_ttm_helper(E) ttm(E) libahci(E) uas(E) ixgbe(E) crc32c_intel(E) drm(E) libata(E) mdio(E) usb_storage(E) dca(E) wmi(E) zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) zavl(POE) icp(POE) zcommon(POE) znvpair(POE) spl(OE) nvmet_tcp(E) nvmet(E) nvme_tcp(E) nvme_fabrics(E) nvme(E) nvme_core(E) t10_pi(E) ipmi_devintf(E) ipmi_msghandler(E)
[  789.295168] BUG: kernel NULL pointer dereference, address: 0000000000000008
[  789.295182] ---[ end trace 0000000000000000 ]---
[  789.302663] #PF: supervisor write access in kernel mode
[  789.302665] #PF: error_code(0x0002) - not-present page
[  789.351645] RIP: 0010:memcpy_erms+0x6/0x10
[  789.353655] PGD 0 P4D 0
[  789.353657] Oops: 0002 [#2] PREEMPT SMP NOPTI
[  789.359322] Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
[  789.363915] CPU: 72 PID: 2493 Comm: kworker/72:1H Tainted: P S    D    OE     5.17.9-1.el8.elrepo.x86_64 #1
[  789.363918] Hardware name: Supermicro SYS-1029U-TN10RT/X11DPU, BIOS 3.2 10/16/2019
[  789.363919] Workqueue: nvmet_tcp_wq nvmet_tcp_io_work [nvmet_tcp]
[  789.366959] RSP: 0018:ffffa6785bc03bb8 EFLAGS: 00010202
[  789.371812]
[  789.371813] RIP: 0010:free_unref_page_commit.isra.119+0x62/0x100
[  789.391619]
[  789.401875] Code: 03 0d 22 03 d3 6d b9 04 00 00 00 83 fa 04 0f 4e ca 8d 04 49 48 8d 4f 08 01 f0 48 98 48 83 c0 01 48 c1 e0 04 4c 01 c8 48 8b 30 <48> 89 4e 08 48 89 47 10 48 89 77 08 48 89 08 b8 01 00 00 00 89 d1
[  789.401877] RSP: 0000:ffffa6785bdffdb8 EFLAGS: 00010086
[  789.401879] RAX: ffff99083fc368d8 RBX: ffffe02e065ae040 RCX: ffffe02e065ae048
[  789.409990] RAX: ffedcf3970da3000 RBX: 000000000000021c RCX: 000000000000021c
[  789.416612] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffe02e065ae040
[  789.416614] RBP: 000000000002d868 R08: ffff9909bffd4b80 R09: ffff99083fc368c8
[  789.416614] R10: ffff98a9ce02e5d0 R11: 0000000000000000 R12: 0000000000000000
[  789.416615] R13: 0000000000000293 R14: 000000007fffffff R15: 0000000000000018
[  789.416616] FS:  0000000000000000(0000) GS:ffff99083fc00000(0000) knlGS:0000000000000000
[  789.422389] RDX: 000000000000021c RSI: ffff98a9d7ab89da RDI: ffedcf3970da3000
[  789.424417] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  789.424419] CR2: 0000000000000008 CR3: 00000082b520a003 CR4: 00000000007706e0
[  789.424420] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  789.424420] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  789.424421] PKRU: 55555554
[  789.424422] Call Trace:
[  789.430979] RBP: ffff984ab05e6f80 R08: 0000000000000000 R09: ffffffff927e2740
[  789.433020]  <TASK>
[  789.433022]  free_unref_page+0x7d/0xe0
[  789.433024]  sgl_free_n_order+0x55/0x70
[  789.452937] R10: 0000000000000000 R11: ffff98aa6fe9cf60 R12: 0000000000006168
[  789.458732]  nvmet_tcp_free_cmd_buffers+0x28/0x50 [nvmet_tcp]
[  789.458736]  nvmet_tcp_io_work+0x4b8/0xb05 [nvmet_tcp]
[  789.466458] R13: 000000000000021c R14: 000000000000021c R15: 0000000000000000
[  789.474182]  ? __switch_to_asm+0x42/0x70
[  789.474185]  ? finish_task_switch+0xb2/0x2c0
[  789.481906] FS:  0000000000000000(0000) GS:ffff99083f500000(0000) knlGS:0000000000000000
[  789.489632]  process_one_work+0x222/0x3f0
[  789.489634]  ? process_one_work+0x3f0/0x3f0
[  789.489636]  worker_thread+0x2d/0x3b0
[  789.497368] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  789.505093]  ? process_one_work+0x3f0/0x3f0
[  789.505095]  kthread+0xd7/0x100
[  789.513786] CR2: 00007f13544ab000 CR3: 00000082b520a003 CR4: 00000000007706e0
[  789.521524]  ? kthread_complete_and_exit+0x20/0x20
[  789.521528]  ret_from_fork+0x1f/0x30
[  789.527880] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  789.535624]  </TASK>
[  789.535625] Modules linked in: tcp_diag(E) inet_diag(E) ext4(E) mbcache(E) jbd2(E)
[  789.543379] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  789.551121]  xt_multiport(E) nft_compat(E) nf_tables(E) nfnetlink(E) bcache(E) crc64(E) dm_cache(E) dm_persistent_data(E)
[  789.554450] PKRU: 55555554
[  789.557498]  dm_bio_prison(E) dm_bufio(E) dm_writecache(E) nvme_rdma(E) nvmet_rdma(E) rdma_cm(E) iw_cm(E)
[  789.565261] Kernel panic - not syncing: Fatal exception
[  789.567986]  ib_cm(E) ib_core(E) dm_mod(E) drbd_transport_tcp(OE) drbd(OE) 8021q(E) garp(E) mrp(E) stp(E) llc(E) rfkill(E) sunrpc(E) intel_rapl_msr(E) intel_rapl_common(E) skx_edac(E) iTCO_wdt(E) intel_pmc_bxt(E) nfit(E) iTCO_vendor_support(E) libnvdimm(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) kvm(E) irqbypass(E) ipmi_ssif(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) rapl(E) intel_cstate(E) i2c_i801(E) mei_me(E) intel_uncore(E) pcspkr(E) acpi_ipmi(E) joydev(E) mei(E) lpc_ich(E) i2c_smbus(E) ioatdma(E) intel_pch_thermal(E) ipmi_si(E) acpi_power_meter(E) acpi_pad(E) vfat(E) fat(E) binfmt_misc(E) sr_mod(E) sd_mod(E) cdrom(E) sg(E) xfs(E) ast(E) i2c_algo_bit(E) drm_vram_helper(E) libcrc32c(E) drm_kms_helper(E) syscopyarea(E) sysfillrect(E) sysimgblt(E) ahci(E) fb_sys_fops(E) drm_ttm_helper(E) ttm(E) libahci(E) uas(E) ixgbe(E) crc32c_intel(E) drm(E) libata(E) mdio(E) usb_storage(E) dca(E) wmi(E) zfs(POE) zunicode(POE) zzstd(OE) zlua(OE)
[  789.723178]  zavl(POE) icp(POE) zcommon(POE) znvpair(POE) spl(OE) nvmet_tcp(E) nvmet(E) nvme_tcp(E) nvme_fabrics(E) nvme(E) nvme_core(E) t10_pi(E) ipmi_devintf(E) ipmi_msghandler(E)
[  789.831341] CR2: 0000000000000008
[  789.835247] ---[ end trace 0000000000000000 ]---
[  789.867305] RIP: 0010:memcpy_erms+0x6/0x10
[  789.871995] Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
[  789.891941] RSP: 0018:ffffa6785bc03bb8 EFLAGS: 00010202
[  789.897768] RAX: ffedcf3970da3000 RBX: 000000000000021c RCX: 000000000000021c
[  789.905507] RDX: 000000000000021c RSI: ffff98a9d7ab89da RDI: ffedcf3970da3000
[  789.913247] RBP: ffff984ab05e6f80 R08: 0000000000000000 R09: ffffffff927e2740
[  789.920995] R10: 0000000000000000 R11: ffff98aa6fe9cf60 R12: 0000000000006168
[  789.928745] R13: 000000000000021c R14: 000000000000021c R15: 0000000000000000
[  789.936492] FS:  0000000000000000(0000) GS:ffff99083fc00000(0000) knlGS:0000000000000000
[  789.945201] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  789.951565] CR2: 0000000000000008 CR3: 00000082b520a003 CR4: 00000000007706e0
[  789.959322] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  789.967072] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  789.974818] PKRU: 55555554
[  790.634767] Shutting down cpus with NMI
[  790.639222] Kernel Offset: 0x11000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[  790.655463] ---[ end Kernel panic - not syncing: Fatal exception ]---

Smithx10 · Jul 22, 2022

I took a look at the Linux 5.17 changelogs to see what kinds of issues had been reported regarding nvme_tcp.

I saw a few mentions, and am going to attempt building 5.17.15.

arch@9d46dd2d-1178-ce83-cbc4-d396e4a24060 /v/t/issues ❯❯❯ ls -1
ChangeLog-5.17.1
ChangeLog-5.17.10
ChangeLog-5.17.11
ChangeLog-5.17.12
ChangeLog-5.17.13
ChangeLog-5.17.14
ChangeLog-5.17.15
ChangeLog-5.17.2
ChangeLog-5.17.3
ChangeLog-5.17.4
ChangeLog-5.17.5
ChangeLog-5.17.6
ChangeLog-5.17.7
ChangeLog-5.17.8
ChangeLog-5.17.9
arch@9d46dd2d-1178-ce83-cbc4-d396e4a24060 /v/t/issues ❯❯❯ rg nvme_tcp
ChangeLog-5.17.14
17719:    nvme_tcp_setup_ctrl
17721:      nvme_tcp_configure_io_queues

ChangeLog-5.17.10
3798:        Workqueue: nvme-wq nvme_tcp_reconnect_ctrl_work [nvme_tcp]
3823:         nvme_tcp_setup_ctrl+0x337/0x390 [nvme_tcp]
3824:         nvme_tcp_reconnect_ctrl_work+0x24/0x40 [nvme_tcp]

ChangeLog-5.17.2
7330:       #5: (&queue->send_mutex){+.+.}-{3:3}, at: nvme_tcp_queue_rq+0x33e/0x380 [nvme_tcp]

Smithx10 · Jul 22, 2022

I had another panic:

[348899.956155] drbd pvc-e92632a9-b649-4520-878b-b3fe4bbfbd66 ac-1f-6b-a4-df-ee: meta connection shut down by peer.
[348899.967030] drbd pvc-e92632a9-b649-4520-878b-b3fe4bbfbd66 ac-1f-6b-a4-df-ee: Terminating sender thread
[348899.979412] drbd pvc-2ab282a6-a09a-4ebe-9a9a-51a805cd4f7b: Terminating worker thread
[348899.987901] drbd pvc-e92632a9-b649-4520-878b-b3fe4bbfbd66 ac-1f-6b-a4-df-ee: Starting sender thread (from drbd_r_pvc-e926 [1436411])
[348900.032428] BUG: unable to handle page fault for address: 0000000064627274
[348900.040001] #PF: supervisor read access in kernel mode
[348900.045819] #PF: error_code(0x0000) - not-present page
[348900.051624] PGD 1ce44b067 P4D 1ce44b067 PUD 0
[348900.056723] Oops: 0000 [#1] PREEMPT SMP NOPTI
[348900.061719] CPU: 3 PID: 1436411 Comm: drbd_r_pvc-e926 Tainted: P S         OE     5.17.9-1.el8.elrepo.x86_64 #1
[348900.072456] Hardware name: Supermicro SYS-1029U-TN10RT/X11DPU, BIOS 3.2 10/16/2019
[348900.080683] RIP: 0010:drbd_free_pages+0x34/0x1f0 [drbd]
[348900.086586] Code: bc fa ff ff 41 56 41 55 4c 8d af b8 fa ff ff 41 54 55 53 48 83 ec 10 85 d2 49 0f 44 c5 48 89 04 24 48 85 f6 0f 84 55 01 00 00 <4c> 8b a7 58 f6 ff ff 49 89 fe 48 89 f3 41 89 d7 48 8b 6e 08 41 81
[348900.106613] RSP: 0018:ffffb03ef9527c88 EFLAGS: 00010286
[348900.112517] RAX: 00000000646276d8 RBX: ffff902d0f5f64b0 RCX: 0000000000000000
[348900.120322] RDX: 0000000000000001 RSI: fffff845c6900ac0 RDI: 0000000064627c1c
[348900.128130] RBP: ffff908eeff73800 R08: ffff902e0366b3e0 R09: ffff902e0366b3e0
[348900.135940] R10: ffff902e0366b3e0 R11: ffff902e0366b3e0 R12: 0000000000000001
[348900.143752] R13: 00000000646276d4 R14: ffff902e0366b35c R15: dead000000000122
[348900.151560] FS:  0000000000000000(0000) GS:ffff908a808c0000(0000) knlGS:0000000000000000
[348900.160326] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[348900.166742] CR2: 0000000064627274 CR3: 00000002624fe001 CR4: 00000000007706e0
[348900.168175] drbd pvc-cc0b63e9-4d96-4f6f-a033-bbcb301a763e ac-1f-6b-a4-df-ee: sock was shut down by peer
[348900.174158] drbd pvc-cc0b63e9-4d96-4f6f-a033-bbcb301a763e ac-1f-6b-a4-df-ee: meta connection shut down by peer.
[348900.174166] drbd pvc-cc0b63e9-4d96-4f6f-a033-bbcb301a763e ac-1f-6b-a4-df-ee: conn( Connected -> NetworkFailure ) peer( Secondary -> Unknown )
[348900.174547] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[348900.174548] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[348900.174549] PKRU: 55555554
[348900.174550] Call Trace:
[348900.174553]  <TASK>
[348900.184658] drbd pvc-cc0b63e9-4d96-4f6f-a033-bbcb301a763e ac-1f-6b-a4-df-ee: Terminating sender thread
[348900.195397]  __drbd_free_peer_req+0x48/0x120 [drbd]
[348900.209338] drbd pvc-cc0b63e9-4d96-4f6f-a033-bbcb301a763e ac-1f-6b-a4-df-ee: Starting sender thread (from drbd_r_pvc-cc0b [1436388])
[348900.217143]  drbd_finish_peer_reqs+0xbc/0x170 [drbd]
[348900.268423]  drain_resync_activity+0x7b/0x860 [drbd]
[348900.274052]  ? _get_ldev_if_state.part.56+0x100/0x100 [drbd]
[348900.280376]  ? wake_up_q+0x49/0x90
[348900.284441]  ? __mutex_unlock_slowpath.isra.24+0x91/0x100
[348900.290489]  ? _get_ldev_if_state.part.56+0x100/0x100 [drbd]
[348900.296796]  conn_disconnect+0x192/0xb20 [drbd]
[348900.301981]  ? _get_ldev_if_state.part.56+0x100/0x100 [drbd]
[348900.308295]  ? _get_ldev_if_state.part.56+0x100/0x100 [drbd]
[348900.314592]  drbd_receiver+0x3b5/0x880 [drbd]
[348900.319583]  ? __drbd_next_peer_device_ref+0x1a0/0x1a0 [drbd]
[348900.325951]  drbd_thread_setup+0x76/0x1c0 [drbd]
[348900.331197]  ? __drbd_next_peer_device_ref+0x1a0/0x1a0 [drbd]
[348900.337571]  kthread+0xd7/0x100
[348900.341312]  ? kthread_complete_and_exit+0x20/0x20
[348900.346692]  ret_from_fork+0x1f/0x30
[348900.350843]  </TASK>
[348900.353601] Modules linked in: bcache(E) crc64(E) dm_cache(E) dm_persistent_data(E) dm_bio_prison(E) dm_bufio(E) dm_writecache(E) nvme_rdma(E) nvmet_rdma(E) rdma_cm(E) iw_cm(E) ib_cm(E) ib_core(E) tcp_diag(E) inet_diag(E) ext4(E) mbcache(E) jbd2(E) xt_multiport(E) nft_compat(E) nf_tables(E) nfnetlink(E) 8021q(E) garp(E) mrp(E) stp(E) llc(E) rfkill(E) sunrpc(E) intel_rapl_msr(E) intel_rapl_common(E) skx_edac(E) nfit(E) libnvdimm(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) iTCO_wdt(E) intel_pmc_bxt(E) iTCO_vendor_support(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) rapl(E) intel_cstate(E) ipmi_ssif(E) mei_me(E) intel_uncore(E) pcspkr(E) i2c_i801(E) acpi_ipmi(E) joydev(E) lpc_ich(E) mei(E) i2c_smbus(E) intel_pch_thermal(E) ioatdma(E) ipmi_si(E) acpi_pad(E) acpi_power_meter(E) vfat(E) fat(E) binfmt_misc(E) dm_mod(E) sr_mod(E) cdrom(E) sd_mod(E) sg(E) ast(E) i2c_algo_bit(E) drm_vram_helper(E) drm_kms_helper(E) syscopyarea(E)
[348900.353645]  xfs(E) sysfillrect(E) sysimgblt(E) fb_sys_fops(E) drm_ttm_helper(E) ttm(E) ahci(E) libahci(E) uas(E) drm(E) ixgbe(E) libata(E) mdio(E) usb_storage(E) dca(E) wmi(E) zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) zavl(POE) icp(POE) zcommon(POE) znvpair(POE) spl(OE) nvmet_tcp(E) nvmet(E) nvme_tcp(E) nvme_fabrics(E) nvme(E) nvme_core(E) t10_pi(E) ipmi_devintf(E) ipmi_msghandler(E) drbd_transport_tcp(OE) drbd(OE) libcrc32c(E) crc32c_intel(E)
[348900.486652] CR2: 0000000064627274
[348900.490644] ---[ end trace 0000000000000000 ]---
[348900.527685] RIP: 0010:drbd_free_pages+0x34/0x1f0 [drbd]
[348900.533597] Code: bc fa ff ff 41 56 41 55 4c 8d af b8 fa ff ff 41 54 55 53 48 83 ec 10 85 d2 49 0f 44 c5 48 89 04 24 48 85 f6 0f 84 55 01 00 00 <4c> 8b a7 58 f6 ff ff 49 89 fe 48 89 f3 41 89 d7 48 8b 6e 08 41 81
[348900.553631] RSP: 0018:ffffb03ef9527c88 EFLAGS: 00010286
[348900.559532] RAX: 00000000646276d8 RBX: ffff902d0f5f64b0 RCX: 0000000000000000
[348900.567349] RDX: 0000000000000001 RSI: fffff845c6900ac0 RDI: 0000000064627c1c
[348900.575164] RBP: ffff908eeff73800 R08: ffff902e0366b3e0 R09: ffff902e0366b3e0
[348900.582972] R10: ffff902e0366b3e0 R11: ffff902e0366b3e0 R12: 0000000000000001
[348900.590773] R13: 00000000646276d4 R14: ffff902e0366b35c R15: dead000000000122
[348900.598581] FS:  0000000000000000(0000) GS:ffff908a808c0000(0000) knlGS:0000000000000000
[348900.607347] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[348900.613771] CR2: 0000000064627274 CR3: 00000002624fe001 CR4: 00000000007706e0
[348900.621597] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[348900.629411] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[348900.637211] PKRU: 55555554
[348900.640578] Kernel panic - not syncing: Fatal exception
[348900.646488] Kernel Offset: 0x1ce00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[348900.677714] ---[ end Kernel panic - not syncing: Fatal exception ]---


Smithx10 · Jul 27, 2022

Thanks for posting these logs. I will look into them in due course.

JoelColledge · Aug 2, 2022

Hi @Smithx10, I've looked at these logs. The first panic seems to be unrelated to DRBD, but the second looks related. I'm guessing it is caused by some race condition between the connection loss code and the resync request completing. Unfortunately there isn't enough information to work out how this might be happening.

Do you still have the preceding kernel logs?

Are you able to reproduce the problem?

JoelColledge · Aug 23, 2022

Good morning @JoelColledge,

Sorry, at the moment I don't have that build running.

Is there a better way to capture more information for you, or are there settings I can add to these storage nodes?

Smithx10 · Aug 23, 2022

@Smithx10

Is there a better way to capture more information for you, or are there settings I can add to these storage nodes?

In this case simply collecting a longer section of the kernel logs might help. A packet trace would be very helpful, but I know that collecting this is often not feasible.
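
For anyone collecting this, a sketch of what that might look like (the DRBD port below is an assumption; use the port from your resource configuration):

# Keep kernel logs across reboots so the messages preceding a panic survive:
mkdir -p /var/log/journal && systemctl restart systemd-journald
# After the crash, extract the kernel log from the previous boot:
journalctl -k -b -1 > kernel-log-before-panic.txt
# Capture the DRBD replication traffic (7789 here is a placeholder port):
tcpdump -i any -w drbd.pcap 'tcp port 7789'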

JoelColledge · Aug 30, 2022

This may have been fixed by https://github.com/LINBIT/drbd/commit/f6a70dc080ed5db90496b695d099c69885c529ca. Please test with the latest master, @Smithx10.

If you are not able to test at the moment, I will close this issue assuming it has been fixed.
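
For reference, testing the latest master out of tree would look roughly like this (a sketch; build requirements and install paths vary by distribution, and kernel headers for the running kernel are needed):

# Build and load DRBD from the master branch (sketch).
git clone --recursive https://github.com/LINBIT/drbd.git
cd drbd
make
make install
# Unload the old module (all DRBD resources must be down first):
rmmod drbd_transport_tcp drbd
modprobe drbd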

JoelColledge · Jan 30, 2023

Closing due to the lack of a response. Assumed to be fixed in drbd-9.2.2.

JoelColledge · Mar 8, 2023