Ceph Fails to Create OSD with device of type crypt
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior: Unable to create an OSD from a crypt-type device.
Expected behavior: According to this table, crypt devices are an allowed configuration (unless I read it incorrectly): https://rook.io/docs/rook/v1.17/CRDs/Cluster/ceph-cluster-crd/#osd-configuration-settings
How to reproduce it (minimal and precise):
1. Create a LUKS volume for an entire disk and pass its udev path in the CephCluster CRD (a minimal sketch follows).
2. View the osd-prepare logs and confirm the device is skipped.
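A minimal sketch of step 1, with placeholder device names (the CRD snippets further down show the real paths I used):

```sh
# Encrypt the whole disk (no partition table) and open the mapping.
# /dev/sdX and the mapping name "data" are placeholders.
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX data

# The opened mapping gets stable udev links under /dev/disk/by-id/;
# the dm-uuid-CRYPT-LUKS2-... link is what I pass to the CephCluster CRD.
ls -l /dev/disk/by-id/ | grep CRYPT-LUKS2
```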
File(s) to submit:
- Cluster CR (custom resource), typically called cluster.yaml, if necessary
Logs to submit: osd-prepare log snippet: `cephosd: skipping device "dm-8": ["Device type is not acceptable. It should be raw device or partition"]`
- Operator's logs, if necessary
- Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`. When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read GitHub documentation if you need help.
Cluster Status to submit:
- Output of kubectl commands, if necessary
To get the health of the cluster, use `kubectl rook-ceph health`. To get the status of the cluster, use `kubectl rook-ceph ceph status`. For more details, see the Rook kubectl Plugin.
Environment:
- OS (e.g. from /etc/os-release): Rocky 8
- Kernel (e.g. `uname -a`): 4.18.0-553.16.1.el8_10.x86_64
- Cloud provider or hardware configuration: Baremetal
- Rook version (use `rook version` inside of a Rook Pod): 1.17.1
- Storage backend version (e.g. for ceph do `ceph -v`): 19.2.1
- Kubernetes version (use `kubectl version`): 1.31.1
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Kubeadm created cluster
- Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/latest-release/Troubleshooting/ceph-toolbox/#interactive-toolbox)): OSDs not created so none, trying to use local disks on node
To get around this issue I was using LVM to create an unformatted LV to pass to Ceph, but then I noticed some poor fio benchmark results and wanted to add a metadata device. That ran into an error, and I found in the configuration docs that a metadata device is not supported when the OSD device is an LVM logical volume.
My requirement is full-disk encryption if possible; alternatively I could look at encrypting just the Ceph data, but I didn't see in the docs how to configure that with Rook.
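The closest thing I could find is the `encryptedDevice` setting in the cluster CRD's storage selection settings, which (if I read it right) makes Rook set up dmcrypt on a raw device itself rather than consuming a pre-opened LUKS mapping. A minimal sketch, assuming that setting applies to host-based clusters like mine, with placeholder device names:

```yaml
# Sketch only: Rook-managed dmcrypt instead of a pre-made LUKS device.
storage:
  useAllNodes: false
  useAllDevices: false
  config:
    encryptedDevice: "true" # per the storage selection settings in the cluster CRD
  nodes:
    - name: n1
      devices:
        - name: sdb # raw, unencrypted disk; placeholder
```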
Crypt devices are expected to work; support was added a long time ago in #5581. @satoru-takeuchi any ideas?
@chrisblatt Could you provide cluster.yaml and the log of the prepare pod?
@satoru-takeuchi here is the cluster.yaml. Note that I generalized the node names, as they follow a longer naming standard that I thought might be more confusing than helpful.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v19.2.1
    allowUnsupported: false
  dataDirHostPath: /data.ssd/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  upgradeOSDRequiresHealthyPGs: false
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
      - name: rook
        enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    metricsDisabled: false
    exporter:
      perfCountersPrioLimit: 5
      statsPeriodSeconds: 5
  network:
    connections:
      encryption:
        enabled: false
      compression:
        enabled: false
      requireMsgr2: false
  crashCollector:
    disable: false
  logCollector:
    enabled: true
    periodicity: daily # one of: hourly, daily, weekly, monthly
    maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M.
  cleanupPolicy:
    confirmation: ""
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
    allowUninstallWithVolumes: false
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    allowDeviceClassUpdate: false # whether to allow changing the device class of an OSD after it is created
    allowOsdCrushWeightUpdate: false # whether to allow resizing the OSD crush weight after osd pvc is increased
    nodes:
      - name: n1
        devices:
          - name: /dev/mapper/luks--data--vg-data
      - name: n2
        devices:
          - config:
              metadataDevice: /dev/mapper/luks--journal-vg-journal
            name: /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-0031852c91e44768aa378f6ddc42a962-data
      - name: n3
        devices:
          - name: /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-0c2161a19337462db6ae493624bb7e9f-data
            config:
              metadataDevice: /dev/mapper/luks--journal-vg-journal
    scheduleAlways: false
    onlyApplyOSDPlacement: false
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
    pgHealthCheckTimeout: 0
  csi:
    readAffinity:
      enabled: true
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
```
I have the same error on a slightly different set of versions:
- OS (e.g. from /etc/os-release): Debian GNU/Linux 12 (bookworm)
- Kernel (e.g. `uname -a`): 6.1.0-35-amd64
- Cloud provider or hardware configuration: bare metal
- Rook version (use `rook version` inside of a Rook Pod): v1.17.2
- Storage backend version (e.g. for ceph do `ceph -v`): 19.2.2
- Kubernetes version (use `kubectl version`): v1.32.4+rke2r1
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): RKE2
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox): none, no OSDs
My configs and logs are below, in case they can help debug. This is a small non-production cluster (hence mgrs/mons on the same node), so I'm happy to tweak and try things if it's helpful.
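For what it's worth, lsblk reports the opened LUKS mapping with TYPE `crypt`, which looks like exactly what the prepare job's "raw device or partition" check rejects. A quick way to see this, checked against the dm-2 entry that appears in the logs below:

```sh
# What the inventory sees for the LUKS mapping on lab05
# (commented output reconstructed from the dm-2 line in the prepare logs below).
lsblk --nodeps -o NAME,KNAME,TYPE,FSTYPE /dev/disk/by-id/dm-name-cryptdata0
# NAME       KNAME TYPE  FSTYPE
# cryptdata0 dm-2  crypt
```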
Operator config
Helm Chart, via Kustomize:
```yaml
---
apiVersion: "kustomize.config.k8s.io/v1beta1"
kind: "Kustomization"
helmCharts:
  - name: "rook-ceph"
    releaseName: "release"
    repo: "https://charts.rook.io/release"
    namespace: "rook-ceph"
    version: "1.17.2"
    kubeVersion: "1.32.4"
    valuesInline:
      crds:
        enabled: true
      csi:
        enableRbdDriver: false
        enableCephfsDriver: true
```
Cluster config
```yaml
apiVersion: "ceph.rook.io/v1"
kind: "CephCluster"
metadata:
  namespace: "rook-ceph"
  name: "rook-ceph"
spec:
  cephVersion:
    image: "quay.io/ceph/ceph:v19.2.2"
    allowUnsupported: false
  dataDirHostPath: "/var/lib/rook"
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "lab05"
        devices:
          - name: "/dev/disk/by-id/dm-name-cryptdata0"
      - name: "lab06"
        devices:
          - name: "/dev/disk/by-id/dm-name-cryptdata0"
  mon:
    count: 3
    allowMultiplePerNode: true # TODO
  mgr:
    count: 2
    allowMultiplePerNode: true # TODO
    modules:
      - name: rook
        enabled: true
  dashboard:
    enabled: true
    ssl: false
  monitoring:
    enabled: false
  crashCollector:
    disable: false
    daysToRetain: 14
  logCollector:
    enabled: true
    periodicity: "daily"
    maxLogSize: "500M"
  cleanupPolicy:
    allowUninstallWithVolumes: false
    confirmation: ""
    sanitizeDisks:
      method: "quick"
      dataSource: "zero"
      iteration: 1
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    mon: "system-node-critical"
    osd: "system-node-critical"
    mgr: "system-cluster-critical"
  disruptionManagement:
    managePodBudgets: true
```
OSD prepare logs
```
2025/05/18 17:46:03 maxprocs: Leaving GOMAXPROCS=6: CPU quota undefined
2025-05-18 17:46:03.145257 I | cephcmd: desired devices to configure osds: [{Name:/dev/disk/by-id/dm-name-cryptdata0 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false}]
2025-05-18 17:46:03.145905 I | rookcmd: starting Rook v1.17.2 with arguments '/rook/rook ceph osd provision'
2025-05-18 17:46:03.145913 I | rookcmd: flag values: --cluster-id=2420d1bb-e0c9-4001-923f-c1fb54036634, --cluster-name=rook-ceph, --data-device-filter=, --data-device-path-filter=, --data-devices=[{"id":"/dev/disk/by-id/dm-name-cryptdata0","storeConfig":{"osdsPerDevice":1}}], --encrypted-device=false, --force-format=false, --help=false, --location=, --log-level=DEBUG, --metadata-device=, --node-name=lab05, --osd-crush-device-class=, --osd-crush-initial-weight=, --osd-database-size=0, --osd-store-type=bluestore, --osd-wal-size=576, --osds-per-device=1, --pvc-backed-osd=false, --replace-osd=-1
2025-05-18 17:46:03.145916 I | ceph-spec: parsing mon endpoints: b=10.43.103.181:6789,c=10.43.80.82:6789,a=10.43.52.199:6789
2025-05-18 17:46:03.149475 I | op-osd: CRUSH location=root=default host=lab05
2025-05-18 17:46:03.149487 I | cephcmd: crush location of osd: root=default host=lab05
2025-05-18 17:46:03.150729 D | cephclient: No ceph configuration override to merge as "rook-config-override" configmap is empty
2025-05-18 17:46:03.150744 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2025-05-18 17:46:03.150844 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2025-05-18 17:46:03.150935 D | cephclient: config file @ /etc/ceph/ceph.conf:
[global]
fsid = 228cac23-0ec0-47c2-a16f-01e990a21961
mon initial members = b c a
mon host = [v2:10.43.103.181:3300,v1:10.43.103.181:6789],[v2:10.43.80.82:3300,v1:10.43.80.82:6789],[v2:10.43.52.199:3300,v1:10.43.52.199:6789]
[client.admin]
keyring = /var/lib/rook/rook-ceph/client.admin.keyring
2025-05-18 17:46:03.150941 D | exec: Running command: dmsetup version
2025-05-18 17:46:03.152697 I | cephosd: Library version: 1.02.202-RHEL9 (2024-11-04)
Driver version: 4.47.0
2025-05-18 17:46:03.157955 I | cephosd: discovering hardware
2025-05-18 17:46:03.157970 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
2025-05-18 17:46:03.163735 D | exec: Running command: lsblk /dev/loop0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.165396 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.165410 W | inventory: skipping device "loop0". exit status 32
2025-05-18 17:46:03.165419 D | exec: Running command: lsblk /dev/loop1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.166995 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.167008 W | inventory: skipping device "loop1". exit status 32
2025-05-18 17:46:03.167016 D | exec: Running command: lsblk /dev/loop2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.168684 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.168699 W | inventory: skipping device "loop2". exit status 32
2025-05-18 17:46:03.168713 D | exec: Running command: lsblk /dev/loop3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.170353 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.170366 W | inventory: skipping device "loop3". exit status 32
2025-05-18 17:46:03.170373 D | exec: Running command: lsblk /dev/loop4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.172035 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.172048 W | inventory: skipping device "loop4". exit status 32
2025-05-18 17:46:03.172056 D | exec: Running command: lsblk /dev/loop5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.173684 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.173695 W | inventory: skipping device "loop5". exit status 32
2025-05-18 17:46:03.173702 D | exec: Running command: lsblk /dev/loop6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.175272 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.175283 W | inventory: skipping device "loop6". exit status 32
2025-05-18 17:46:03.175289 D | exec: Running command: lsblk /dev/loop7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.176881 E | sys: failed to execute lsblk. output: .
2025-05-18 17:46:03.176892 W | inventory: skipping device "loop7". exit status 32
2025-05-18 17:46:03.176899 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.181032 D | sys: lsblk output: "SIZE=\"800166076416\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sda\" KNAME=\"/dev/sda\" MOUNTPOINT=\"\" FSTYPE=\"crypto_LUKS\""
2025-05-18 17:46:03.181081 D | exec: Running command: sgdisk --print /dev/sda
2025-05-18 17:46:03.183773 W | inventory: uuid not found for device /dev/sda. output=Creating new GPT entries in memory.
Disk /dev/sda: 1562824368 sectors, 745.2 GiB
Model: INTEL SSDSC2BX80
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 98D77A2E-E5BA-47FE-A245-4C0D370ABD0B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1562824334
Partitions will be aligned on 2048-sector boundaries
Total free space is 1562824301 sectors (745.2 GiB)
Number Start (sector) End (sector) Size Code Name
2025-05-18 17:46:03.183790 D | exec: Running command: udevadm info --query=property /dev/sda
2025-05-18 17:46:03.189323 D | sys: udevadm info output: "DEVPATH=/devices/pci0000:00/0000:00:17.0/ata1/host0/target0:0:0/0:0:0:0/block/sda\nDEVNAME=/dev/sda\nDEVTYPE=disk\nDISKSEQ=2\nMAJOR=8\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=17942897\nID_ATA=1\nID_TYPE=disk\nID_BUS=ata\nID_MODEL=INTEL_SSDSC2BX800G4R\nID_MODEL_ENC=INTEL\\x20SSDSC2BX800G4R\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\nID_REVISION=G201DL2D\nID_SERIAL=INTEL_SSDSC2BX800G4R_BTHC705405Q0800NGN\nID_SERIAL_SHORT=BTHC705405Q0800NGN\nID_ATA_WRITE_CACHE=1\nID_ATA_WRITE_CACHE_ENABLED=1\nID_ATA_FEATURE_SET_PM=1\nID_ATA_FEATURE_SET_PM_ENABLED=1\nID_ATA_FEATURE_SET_SMART=1\nID_ATA_FEATURE_SET_SMART_ENABLED=1\nID_ATA_DOWNLOAD_MICROCODE=1\nID_ATA_SATA=1\nID_ATA_SATA_SIGNAL_RATE_GEN2=1\nID_ATA_SATA_SIGNAL_RATE_GEN1=1\nID_ATA_ROTATION_RATE_RPM=0\nID_WWN=0x55cd2e414da2aa0e\nID_WWN_WITH_EXTENSION=0x55cd2e414da2aa0e\nID_PATH=pci-0000:00:17.0-ata-1.0\nID_PATH_TAG=pci-0000_00_17_0-ata-1_0\nID_PATH_ATA_COMPAT=pci-0000:00:17.0-ata-1\nID_FS_VERSION=2\nID_FS_UUID=6523c424-9615-4217-ada3-1ce912013e2d\nID_FS_UUID_ENC=6523c424-9615-4217-ada3-1ce912013e2d\nID_FS_TYPE=crypto_LUKS\nID_FS_USAGE=crypto\nDEVLINKS=/dev/disk/by-id/ata-INTEL_SSDSC2BX800G4R_BTHC705405Q0800NGN /dev/disk/by-path/pci-0000:00:17.0-ata-1 /dev/disk/by-diskseq/2 /dev/disk/by-uuid/6523c424-9615-4217-ada3-1ce912013e2d /dev/disk/by-id/wwn-0x55cd2e414da2aa0e /dev/disk/by-path/pci-0000:00:17.0-ata-1.0\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.189349 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/sda
2025-05-18 17:46:03.191378 I | inventory: skipping device "sda" because it has child, considering the child instead.
2025-05-18 17:46:03.191392 D | exec: Running command: lsblk /dev/nbd0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.195548 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd0\" KNAME=\"/dev/nbd0\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.195585 D | exec: Running command: sgdisk --print /dev/nbd0
2025-05-18 17:46:03.198722 D | exec: Running command: udevadm info --query=property /dev/nbd0
2025-05-18 17:46:03.204542 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd0\nDEVNAME=/dev/nbd0\nDEVTYPE=disk\nDISKSEQ=15\nMAJOR=43\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508557374\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/15\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.204563 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd0
2025-05-18 17:46:03.206301 D | exec: Running command: lsblk /dev/nbd1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.210701 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd1\" KNAME=\"/dev/nbd1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.210742 D | exec: Running command: sgdisk --print /dev/nbd1
2025-05-18 17:46:03.213568 D | exec: Running command: udevadm info --query=property /dev/nbd1
2025-05-18 17:46:03.218842 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd1\nDEVNAME=/dev/nbd1\nDEVTYPE=disk\nDISKSEQ=16\nMAJOR=43\nMINOR=32\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508557463\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/16\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.218864 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd1
2025-05-18 17:46:03.220714 D | exec: Running command: lsblk /dev/nbd2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.224648 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd2\" KNAME=\"/dev/nbd2\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.224685 D | exec: Running command: sgdisk --print /dev/nbd2
2025-05-18 17:46:03.227725 D | exec: Running command: udevadm info --query=property /dev/nbd2
2025-05-18 17:46:03.232627 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd2\nDEVNAME=/dev/nbd2\nDEVTYPE=disk\nDISKSEQ=17\nMAJOR=43\nMINOR=64\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508559163\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/17\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.232646 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd2
2025-05-18 17:46:03.234312 D | exec: Running command: lsblk /dev/nbd3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.237929 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd3\" KNAME=\"/dev/nbd3\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.237970 D | exec: Running command: sgdisk --print /dev/nbd3
2025-05-18 17:46:03.250700 D | exec: Running command: udevadm info --query=property /dev/nbd3
2025-05-18 17:46:03.256048 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd3\nDEVNAME=/dev/nbd3\nDEVTYPE=disk\nDISKSEQ=18\nMAJOR=43\nMINOR=96\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508558864\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/18\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.256103 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd3
2025-05-18 17:46:03.257994 D | exec: Running command: lsblk /dev/nbd4 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.262248 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd4\" KNAME=\"/dev/nbd4\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.262288 D | exec: Running command: sgdisk --print /dev/nbd4
2025-05-18 17:46:03.265376 D | exec: Running command: udevadm info --query=property /dev/nbd4
2025-05-18 17:46:03.270910 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd4\nDEVNAME=/dev/nbd4\nDEVTYPE=disk\nDISKSEQ=19\nMAJOR=43\nMINOR=128\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508562237\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/19\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.270931 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd4
2025-05-18 17:46:03.272736 D | exec: Running command: lsblk /dev/nbd5 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.278933 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd5\" KNAME=\"/dev/nbd5\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.279006 D | exec: Running command: sgdisk --print /dev/nbd5
2025-05-18 17:46:03.281707 D | exec: Running command: udevadm info --query=property /dev/nbd5
2025-05-18 17:46:03.286903 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd5\nDEVNAME=/dev/nbd5\nDEVTYPE=disk\nDISKSEQ=20\nMAJOR=43\nMINOR=160\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508560997\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/20\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.286923 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd5
2025-05-18 17:46:03.288625 D | exec: Running command: lsblk /dev/nbd6 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.292698 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd6\" KNAME=\"/dev/nbd6\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.292732 D | exec: Running command: sgdisk --print /dev/nbd6
2025-05-18 17:46:03.295430 D | exec: Running command: udevadm info --query=property /dev/nbd6
2025-05-18 17:46:03.300615 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd6\nDEVNAME=/dev/nbd6\nDEVTYPE=disk\nDISKSEQ=21\nMAJOR=43\nMINOR=192\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508560047\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/21\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.300635 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd6
2025-05-18 17:46:03.302400 D | exec: Running command: lsblk /dev/nbd7 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.306559 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd7\" KNAME=\"/dev/nbd7\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.306597 D | exec: Running command: sgdisk --print /dev/nbd7
2025-05-18 17:46:03.309332 D | exec: Running command: udevadm info --query=property /dev/nbd7
2025-05-18 17:46:03.314497 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd7\nDEVNAME=/dev/nbd7\nDEVTYPE=disk\nDISKSEQ=22\nMAJOR=43\nMINOR=224\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508564234\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/22\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.314518 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd7
2025-05-18 17:46:03.316284 D | exec: Running command: lsblk /dev/dm-0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.322483 D | sys: lsblk output: "SIZE=\"254968594432\" ROTA=\"0\" RO=\"0\" TYPE=\"crypt\" PKNAME=\"\" NAME=\"/dev/mapper/cryptroot\" KNAME=\"/dev/dm-0\" MOUNTPOINT=\"\" FSTYPE=\"LVM2_member\""
2025-05-18 17:46:03.322506 D | exec: Running command: udevadm info --query=property /dev/dm-0
2025-05-18 17:46:03.328122 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/dm-0\nDEVNAME=/dev/dm-0\nDEVTYPE=disk\nDISKSEQ=3\nMAJOR=254\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=11943232\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES=1\nDM_UDEV_RULES_VSN=2\nDM_NAME=cryptroot\nDM_UUID=CRYPT-LUKS2-c5d7e8c9a4434c168fc844cab2369af0-cryptroot\nDM_SUSPENDED=0\nID_FS_UUID=8LQxm1-uTQc-ZnfW-eNtg-4Ewt-QU8Y-s8RSfl\nID_FS_UUID_ENC=8LQxm1-uTQc-ZnfW-eNtg-4Ewt-QU8Y-s8RSfl\nID_FS_VERSION=LVM2 001\nID_FS_TYPE=LVM2_member\nID_FS_USAGE=raid\nSYSTEMD_READY=1\nDEVLINKS=/dev/mapper/cryptroot /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-c5d7e8c9a4434c168fc844cab2369af0-cryptroot /dev/disk/by-id/dm-name-cryptroot /dev/disk/by-id/lvm-pv-uuid-8LQxm1-uTQc-ZnfW-eNtg-4Ewt-QU8Y-s8RSfl\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.328288 D | exec: Running command: lsblk /dev/dm-1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.332776 D | sys: lsblk output: "SIZE=\"34359738368\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/vgroot-root\" KNAME=\"/dev/dm-1\" MOUNTPOINT=\"/rootfs\" FSTYPE=\"ext4\""
2025-05-18 17:46:03.332837 D | exec: Running command: udevadm info --query=property /dev/dm-1
2025-05-18 17:46:03.338172 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/dm-1\nDEVNAME=/dev/dm-1\nDEVTYPE=disk\nDISKSEQ=4\nMAJOR=254\nMINOR=1\nSUBSYSTEM=block\nUSEC_INITIALIZED=12658099\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES=1\nDM_UDEV_RULES_VSN=2\nDM_NAME=vgroot-root\nDM_UUID=LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQQPSBPMGVRHfsZNuqxazGRcF0tarySee0\nDM_SUSPENDED=0\nDM_VG_NAME=vgroot\nDM_LV_NAME=root\nID_FS_UUID=25d9d673-f7bf-4c9e-bf80-d5b8884caacb\nID_FS_UUID_ENC=25d9d673-f7bf-4c9e-bf80-d5b8884caacb\nID_FS_VERSION=1.0\nID_FS_TYPE=ext4\nID_FS_USAGE=filesystem\nSYSTEMD_READY=1\nDEVLINKS=/dev/vgroot/root /dev/disk/by-uuid/25d9d673-f7bf-4c9e-bf80-d5b8884caacb /dev/disk/by-id/dm-name-vgroot-root /dev/mapper/vgroot-root /dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQQPSBPMGVRHfsZNuqxazGRcF0tarySee0\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.338201 D | exec: Running command: lsblk /dev/dm-2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.342388 D | sys: lsblk output: "SIZE=\"800149299200\" ROTA=\"0\" RO=\"0\" TYPE=\"crypt\" PKNAME=\"\" NAME=\"/dev/mapper/cryptdata0\" KNAME=\"/dev/dm-2\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.342409 D | exec: Running command: udevadm info --query=property /dev/dm-2
2025-05-18 17:46:03.347605 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/dm-2\nDEVNAME=/dev/dm-2\nDEVTYPE=disk\nDISKSEQ=5\nMAJOR=254\nMINOR=2\nSUBSYSTEM=block\nUSEC_INITIALIZED=16556807\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES=1\nDM_UDEV_RULES_VSN=2\nDM_NAME=cryptdata0\nDM_UUID=CRYPT-LUKS2-6523c42496154217ada31ce912013e2d-cryptdata0\nDM_SUSPENDED=0\nSYSTEMD_READY=0\nDEVLINKS=/dev/mapper/cryptdata0 /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-6523c42496154217ada31ce912013e2d-cryptdata0 /dev/disk/by-id/dm-name-cryptdata0\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.347627 D | exec: Running command: lsblk /dev/dm-3 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.351710 D | sys: lsblk output: "SIZE=\"8589934592\" ROTA=\"0\" RO=\"0\" TYPE=\"lvm\" PKNAME=\"\" NAME=\"/dev/mapper/vgroot-home\" KNAME=\"/dev/dm-3\" MOUNTPOINT=\"/rootfs/home\" FSTYPE=\"ext4\""
2025-05-18 17:46:03.351730 D | exec: Running command: udevadm info --query=property /dev/dm-3
2025-05-18 17:46:03.356968 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/dm-3\nDEVNAME=/dev/dm-3\nDEVTYPE=disk\nDISKSEQ=14\nMAJOR=254\nMINOR=3\nSUBSYSTEM=block\nUSEC_INITIALIZED=18106881\nDM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1\nDM_UDEV_PRIMARY_SOURCE_FLAG=1\nDM_UDEV_RULES=1\nDM_UDEV_RULES_VSN=2\nDM_NAME=vgroot-home\nDM_UUID=LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQs1F2H7PIjBzm6TOuMAAKBenDO5cd9h4c\nDM_SUSPENDED=0\nDM_VG_NAME=vgroot\nDM_LV_NAME=home\nID_FS_UUID=6f49edee-9ef4-41f0-a87f-4cc62304bbaf\nID_FS_UUID_ENC=6f49edee-9ef4-41f0-a87f-4cc62304bbaf\nID_FS_VERSION=1.0\nID_FS_TYPE=ext4\nID_FS_USAGE=filesystem\nSYSTEMD_READY=1\nDEVLINKS=/dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQs1F2H7PIjBzm6TOuMAAKBenDO5cd9h4c /dev/disk/by-id/dm-name-vgroot-home /dev/disk/by-uuid/6f49edee-9ef4-41f0-a87f-4cc62304bbaf /dev/mapper/vgroot-home /dev/vgroot/home\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.356993 D | exec: Running command: lsblk /dev/nvme0n1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.361286 D | sys: lsblk output: "SIZE=\"256060514304\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nvme0n1\" KNAME=\"/dev/nvme0n1\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.361328 D | exec: Running command: sgdisk --print /dev/nvme0n1
2025-05-18 17:46:03.366477 D | exec: Running command: udevadm info --query=property /dev/nvme0n1
2025-05-18 17:46:03.373048 D | sys: udevadm info output: "DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:02:00.0/nvme/nvme0/nvme0n1\nDEVNAME=/dev/nvme0n1\nDEVTYPE=disk\nDISKSEQ=1\nMAJOR=259\nMINOR=0\nSUBSYSTEM=block\nUSEC_INITIALIZED=17952925\nID_SERIAL_SHORT=89NPD108PQEN\nID_WWN=eui.01000000000000008ce38e040021ad4b\nID_MODEL=KBG40ZNS256G NVMe TOSHIBA 256GB\nID_REVISION=10410104\nID_NSID=1\nID_SERIAL=KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1\nID_PATH=pci-0000:02:00.0-nvme-1\nID_PATH_TAG=pci-0000_02_00_0-nvme-1\nID_PART_TABLE_UUID=b564eab0-eae7-4235-9073-48f5c74c683d\nID_PART_TABLE_TYPE=gpt\nDEVLINKS=/dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN /dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b /dev/disk/by-diskseq/1 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1 /dev/disk/by-path/pci-0000:02:00.0-nvme-1\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.373072 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nvme0n1
2025-05-18 17:46:03.375544 I | inventory: skipping device "nvme0n1" because it has child, considering the child instead.
2025-05-18 17:46:03.375557 D | exec: Running command: lsblk /dev/nvme0n1p1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.380102 D | sys: lsblk output: "SIZE=\"1073741824\" ROTA=\"0\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/nvme0n1\" NAME=\"/dev/nvme0n1p1\" KNAME=\"/dev/nvme0n1p1\" MOUNTPOINT=\"/rootfs/efi\" FSTYPE=\"vfat\""
2025-05-18 17:46:03.380126 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p1
2025-05-18 17:46:03.385365 D | sys: udevadm info output: "DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:02:00.0/nvme/nvme0/nvme0n1/nvme0n1p1\nDEVNAME=/dev/nvme0n1p1\nDEVTYPE=partition\nDISKSEQ=1\nPARTN=1\nPARTNAME=EFI system partition\nMAJOR=259\nMINOR=1\nSUBSYSTEM=block\nUSEC_INITIALIZED=17963532\nID_SERIAL_SHORT=89NPD108PQEN\nID_WWN=eui.01000000000000008ce38e040021ad4b\nID_MODEL=KBG40ZNS256G NVMe TOSHIBA 256GB\nID_REVISION=10410104\nID_NSID=1\nID_SERIAL=KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1\nID_PATH=pci-0000:02:00.0-nvme-1\nID_PATH_TAG=pci-0000_02_00_0-nvme-1\nID_PART_TABLE_UUID=b564eab0-eae7-4235-9073-48f5c74c683d\nID_PART_TABLE_TYPE=gpt\nID_FS_UUID=4AAA-4FB4\nID_FS_UUID_ENC=4AAA-4FB4\nID_FS_VERSION=FAT32\nID_FS_TYPE=vfat\nID_FS_USAGE=filesystem\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_NAME=EFI\\x20system\\x20partition\nID_PART_ENTRY_UUID=01813989-e41f-4d45-86d7-7e25abde893f\nID_PART_ENTRY_TYPE=c12a7328-f81f-11d2-ba4b-00a0c93ec93b\nID_PART_ENTRY_NUMBER=1\nID_PART_ENTRY_OFFSET=2048\nID_PART_ENTRY_SIZE=2097152\nID_PART_ENTRY_DISK=259:0\nDEVLINKS=/dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part1 /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part1 /dev/disk/by-partlabel/EFI\\x20system\\x20partition /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part1 /dev/disk/by-uuid/4AAA-4FB4 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part1 /dev/disk/by-partuuid/01813989-e41f-4d45-86d7-7e25abde893f\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.385393 D | exec: Running command: lsblk /dev/nvme0n1p2 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.389725 D | sys: lsblk output: "SIZE=\"254985371648\" ROTA=\"0\" RO=\"0\" TYPE=\"part\" PKNAME=\"/dev/nvme0n1\" NAME=\"/dev/nvme0n1p2\" KNAME=\"/dev/nvme0n1p2\" MOUNTPOINT=\"\" FSTYPE=\"crypto_LUKS\""
2025-05-18 17:46:03.389744 D | exec: Running command: udevadm info --query=property /dev/nvme0n1p2
2025-05-18 17:46:03.395031 D | sys: udevadm info output: "DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:02:00.0/nvme/nvme0/nvme0n1/nvme0n1p2\nDEVNAME=/dev/nvme0n1p2\nDEVTYPE=partition\nDISKSEQ=1\nPARTN=2\nPARTNAME=Linux filesystem\nMAJOR=259\nMINOR=2\nSUBSYSTEM=block\nUSEC_INITIALIZED=17956532\nID_SERIAL_SHORT=89NPD108PQEN\nID_WWN=eui.01000000000000008ce38e040021ad4b\nID_MODEL=KBG40ZNS256G NVMe TOSHIBA 256GB\nID_REVISION=10410104\nID_NSID=1\nID_SERIAL=KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1\nID_PATH=pci-0000:02:00.0-nvme-1\nID_PATH_TAG=pci-0000_02_00_0-nvme-1\nID_PART_TABLE_UUID=b564eab0-eae7-4235-9073-48f5c74c683d\nID_PART_TABLE_TYPE=gpt\nID_FS_VERSION=2\nID_FS_UUID=c5d7e8c9-a443-4c16-8fc8-44cab2369af0\nID_FS_UUID_ENC=c5d7e8c9-a443-4c16-8fc8-44cab2369af0\nID_FS_TYPE=crypto_LUKS\nID_FS_USAGE=crypto\nID_PART_ENTRY_SCHEME=gpt\nID_PART_ENTRY_NAME=Linux\\x20filesystem\nID_PART_ENTRY_UUID=549319a0-62cb-4db4-84c3-3d414c411f8c\nID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4\nID_PART_ENTRY_NUMBER=2\nID_PART_ENTRY_OFFSET=2099200\nID_PART_ENTRY_SIZE=498018304\nID_PART_ENTRY_DISK=259:0\nDEVLINKS=/dev/disk/by-partuuid/549319a0-62cb-4db4-84c3-3d414c411f8c /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part2 /dev/disk/by-partlabel/Linux\\x20filesystem /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part2 /dev/disk/by-uuid/c5d7e8c9-a443-4c16-8fc8-44cab2369af0 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part2 /dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part2\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.395115 D | exec: Running command: lsblk /dev/nbd8 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.400278 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd8\" KNAME=\"/dev/nbd8\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.400418 D | exec: Running command: sgdisk --print /dev/nbd8
2025-05-18 17:46:03.404442 D | exec: Running command: udevadm info --query=property /dev/nbd8
2025-05-18 17:46:03.409970 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd8\nDEVNAME=/dev/nbd8\nDEVTYPE=disk\nDISKSEQ=23\nMAJOR=43\nMINOR=256\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508565250\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/23\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.410033 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd8
2025-05-18 17:46:03.411983 D | exec: Running command: lsblk /dev/nbd9 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.416333 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd9\" KNAME=\"/dev/nbd9\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.416420 D | exec: Running command: sgdisk --print /dev/nbd9
2025-05-18 17:46:03.419806 D | exec: Running command: udevadm info --query=property /dev/nbd9
2025-05-18 17:46:03.425828 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd9\nDEVNAME=/dev/nbd9\nDEVTYPE=disk\nDISKSEQ=24\nMAJOR=43\nMINOR=288\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508561472\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/24\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.425887 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd9
2025-05-18 17:46:03.427844 D | exec: Running command: lsblk /dev/nbd10 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.434467 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd10\" KNAME=\"/dev/nbd10\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.434507 D | exec: Running command: sgdisk --print /dev/nbd10
2025-05-18 17:46:03.450939 D | exec: Running command: udevadm info --query=property /dev/nbd10
2025-05-18 17:46:03.456269 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd10\nDEVNAME=/dev/nbd10\nDEVTYPE=disk\nDISKSEQ=25\nMAJOR=43\nMINOR=320\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508562462\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/25\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.456428 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd10
2025-05-18 17:46:03.458542 D | exec: Running command: lsblk /dev/nbd11 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.462736 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd11\" KNAME=\"/dev/nbd11\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.462779 D | exec: Running command: sgdisk --print /dev/nbd11
2025-05-18 17:46:03.484305 D | exec: Running command: udevadm info --query=property /dev/nbd11
2025-05-18 17:46:03.491859 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd11\nDEVNAME=/dev/nbd11\nDEVTYPE=disk\nDISKSEQ=26\nMAJOR=43\nMINOR=352\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508562568\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/26\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.491891 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd11
2025-05-18 17:46:03.493864 D | exec: Running command: lsblk /dev/nbd12 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.499221 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd12\" KNAME=\"/dev/nbd12\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.499792 D | exec: Running command: sgdisk --print /dev/nbd12
2025-05-18 17:46:03.504128 D | exec: Running command: udevadm info --query=property /dev/nbd12
2025-05-18 17:46:03.509493 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd12\nDEVNAME=/dev/nbd12\nDEVTYPE=disk\nDISKSEQ=27\nMAJOR=43\nMINOR=384\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508564294\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/27\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.509628 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd12
2025-05-18 17:46:03.511672 D | exec: Running command: lsblk /dev/nbd13 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.516221 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd13\" KNAME=\"/dev/nbd13\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.516556 D | exec: Running command: sgdisk --print /dev/nbd13
2025-05-18 17:46:03.523694 D | exec: Running command: udevadm info --query=property /dev/nbd13
2025-05-18 17:46:03.534369 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd13\nDEVNAME=/dev/nbd13\nDEVTYPE=disk\nDISKSEQ=28\nMAJOR=43\nMINOR=416\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508566265\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/28\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.534540 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd13
2025-05-18 17:46:03.536933 D | exec: Running command: lsblk /dev/nbd14 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.541339 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd14\" KNAME=\"/dev/nbd14\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.541471 D | exec: Running command: sgdisk --print /dev/nbd14
2025-05-18 17:46:03.563020 D | exec: Running command: udevadm info --query=property /dev/nbd14
2025-05-18 17:46:03.570724 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd14\nDEVNAME=/dev/nbd14\nDEVTYPE=disk\nDISKSEQ=29\nMAJOR=43\nMINOR=448\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508566983\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/29\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.570743 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd14
2025-05-18 17:46:03.572484 D | exec: Running command: lsblk /dev/nbd15 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.576399 D | sys: lsblk output: "SIZE=\"0\" ROTA=\"0\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/nbd15\" KNAME=\"/dev/nbd15\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.576538 D | exec: Running command: sgdisk --print /dev/nbd15
2025-05-18 17:46:03.580963 D | exec: Running command: udevadm info --query=property /dev/nbd15
2025-05-18 17:46:03.586233 D | sys: udevadm info output: "DEVPATH=/devices/virtual/block/nbd15\nDEVNAME=/dev/nbd15\nDEVTYPE=disk\nDISKSEQ=30\nMAJOR=43\nMINOR=480\nSUBSYSTEM=block\nUSEC_INITIALIZED=2508569271\nSYSTEMD_READY=0\nDEVLINKS=/dev/disk/by-diskseq/30\nTAGS=:systemd:\nCURRENT_TAGS=:systemd:"
2025-05-18 17:46:03.586382 D | exec: Running command: lsblk --noheadings --path --list --output NAME /dev/nbd15
2025-05-18 17:46:03.588374 D | inventory: discovered disks are:
2025-05-18 17:46:03.588501 D | inventory: &{Name:nbd0 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/15 Size:0 UUID:0196a87d-2f4f-4330-a21d-6745560a40aa Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd0 KernelName:nbd0 Encrypted:false}
2025-05-18 17:46:03.588607 D | inventory: &{Name:nbd1 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/16 Size:0 UUID:6a474e4d-7540-4d51-bd69-0d951d28d0b9 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd1 KernelName:nbd1 Encrypted:false}
2025-05-18 17:46:03.588680 D | inventory: &{Name:nbd2 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/17 Size:0 UUID:21b4aca1-9e37-4e73-8df0-451ab228052b Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd2 KernelName:nbd2 Encrypted:false}
2025-05-18 17:46:03.588749 D | inventory: &{Name:nbd3 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/18 Size:0 UUID:aaee7f93-fc9b-4594-abd4-7a342efc41f3 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd3 KernelName:nbd3 Encrypted:false}
2025-05-18 17:46:03.588815 D | inventory: &{Name:nbd4 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/19 Size:0 UUID:083a58c9-abc0-4fbe-b29a-218136a425ba Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd4 KernelName:nbd4 Encrypted:false}
2025-05-18 17:46:03.588884 D | inventory: &{Name:nbd5 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/20 Size:0 UUID:6478bfd6-a665-452e-a3d4-354f90452026 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd5 KernelName:nbd5 Encrypted:false}
2025-05-18 17:46:03.588953 D | inventory: &{Name:nbd6 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/21 Size:0 UUID:161132c8-521f-4e24-be14-35be5b3dc9bd Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd6 KernelName:nbd6 Encrypted:false}
2025-05-18 17:46:03.589021 D | inventory: &{Name:nbd7 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/22 Size:0 UUID:d7c13d8d-fc98-4f0e-b167-a58586ce9ab0 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd7 KernelName:nbd7 Encrypted:false}
2025-05-18 17:46:03.589085 D | inventory: &{Name:dm-0 Parent: HasChildren:false DevLinks:/dev/mapper/cryptroot /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-c5d7e8c9a4434c168fc844cab2369af0-cryptroot /dev/disk/by-id/dm-name-cryptroot /dev/disk/by-id/lvm-pv-uuid-8LQxm1-uTQc-ZnfW-eNtg-4Ewt-QU8Y-s8RSfl Size:254968594432 UUID: Serial: Type:crypt Rotational:false Readonly:false Partitions:[] Filesystem:LVM2_member Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/cryptroot KernelName:dm-0 Encrypted:false}
2025-05-18 17:46:03.589168 D | inventory: &{Name:dm-1 Parent: HasChildren:false DevLinks:/dev/vgroot/root /dev/disk/by-uuid/25d9d673-f7bf-4c9e-bf80-d5b8884caacb /dev/disk/by-id/dm-name-vgroot-root /dev/mapper/vgroot-root /dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQQPSBPMGVRHfsZNuqxazGRcF0tarySee0 Size:34359738368 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:rootfs Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/vgroot-root KernelName:dm-1 Encrypted:false}
2025-05-18 17:46:03.589240 D | inventory: &{Name:dm-2 Parent: HasChildren:false DevLinks:/dev/mapper/cryptdata0 /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-6523c42496154217ada31ce912013e2d-cryptdata0 /dev/disk/by-id/dm-name-cryptdata0 Size:800149299200 UUID: Serial: Type:crypt Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/cryptdata0 KernelName:dm-2 Encrypted:false}
2025-05-18 17:46:03.589308 D | inventory: &{Name:dm-3 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQs1F2H7PIjBzm6TOuMAAKBenDO5cd9h4c /dev/disk/by-id/dm-name-vgroot-home /dev/disk/by-uuid/6f49edee-9ef4-41f0-a87f-4cc62304bbaf /dev/mapper/vgroot-home /dev/vgroot/home Size:8589934592 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:home Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/vgroot-home KernelName:dm-3 Encrypted:false}
2025-05-18 17:46:03.589375 D | inventory: &{Name:nvme0n1p1 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part1 /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part1 /dev/disk/by-partlabel/EFI\x20system\x20partition /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part1 /dev/disk/by-uuid/4AAA-4FB4 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part1 /dev/disk/by-partuuid/01813989-e41f-4d45-86d7-7e25abde893f Size:1073741824 UUID: Serial:KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:vfat Mountpoint:efi Vendor: Model:KBG40ZNS256G NVMe TOSHIBA 256GB WWN:eui.01000000000000008ce38e040021ad4b WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p1 KernelName:nvme0n1p1 Encrypted:false}
2025-05-18 17:46:03.589454 D | inventory: &{Name:nvme0n1p2 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-partuuid/549319a0-62cb-4db4-84c3-3d414c411f8c /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part2 /dev/disk/by-partlabel/Linux\x20filesystem /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part2 /dev/disk/by-uuid/c5d7e8c9-a443-4c16-8fc8-44cab2369af0 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part2 /dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part2 Size:254985371648 UUID: Serial:KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:crypto_LUKS Mountpoint: Vendor: Model:KBG40ZNS256G NVMe TOSHIBA 256GB WWN:eui.01000000000000008ce38e040021ad4b WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p2 KernelName:nvme0n1p2 Encrypted:false}
2025-05-18 17:46:03.589535 D | inventory: &{Name:nbd8 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/23 Size:0 UUID:11c10883-3e3c-42f8-94a9-15a0cca5604c Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd8 KernelName:nbd8 Encrypted:false}
2025-05-18 17:46:03.589614 D | inventory: &{Name:nbd9 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/24 Size:0 UUID:413aab43-a5fb-43ed-8c81-be389c7b707d Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd9 KernelName:nbd9 Encrypted:false}
2025-05-18 17:46:03.589680 D | inventory: &{Name:nbd10 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/25 Size:0 UUID:3209e73d-417b-453f-a0b9-7dac2c3cfc6d Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd10 KernelName:nbd10 Encrypted:false}
2025-05-18 17:46:03.589747 D | inventory: &{Name:nbd11 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/26 Size:0 UUID:4b04474c-fc00-4cf5-a616-7bb74424a1a5 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd11 KernelName:nbd11 Encrypted:false}
2025-05-18 17:46:03.589812 D | inventory: &{Name:nbd12 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/27 Size:0 UUID:c773ea59-4309-4a96-b37d-88ea88f566a1 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd12 KernelName:nbd12 Encrypted:false}
2025-05-18 17:46:03.589880 D | inventory: &{Name:nbd13 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/28 Size:0 UUID:16edbd81-6c24-4712-8b44-61d24897f234 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd13 KernelName:nbd13 Encrypted:false}
2025-05-18 17:46:03.589944 D | inventory: &{Name:nbd14 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/29 Size:0 UUID:10bfe7f7-cb3e-4a91-ad1c-1c0704bb65bf Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd14 KernelName:nbd14 Encrypted:false}
2025-05-18 17:46:03.590011 D | inventory: &{Name:nbd15 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/30 Size:0 UUID:1b23b721-1d95-4889-b933-353d6f0b59a4 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd15 KernelName:nbd15 Encrypted:false}
2025-05-18 17:46:03.590068 I | cephosd: creating and starting the osds
2025-05-18 17:46:03.590150 D | cephosd: desiredDevices are [{Name:/dev/disk/by-id/dm-name-cryptdata0 OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: InitialWeight: IsFilter:false IsDevicePathFilter:false}]
2025-05-18 17:46:03.590212 D | cephosd: context.Devices are:
2025-05-18 17:46:03.590278 D | cephosd: &{Name:nbd0 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/15 Size:0 UUID:0196a87d-2f4f-4330-a21d-6745560a40aa Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd0 KernelName:nbd0 Encrypted:false}
2025-05-18 17:46:03.590346 D | cephosd: &{Name:nbd1 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/16 Size:0 UUID:6a474e4d-7540-4d51-bd69-0d951d28d0b9 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd1 KernelName:nbd1 Encrypted:false}
2025-05-18 17:46:03.590520 D | cephosd: &{Name:nbd2 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/17 Size:0 UUID:21b4aca1-9e37-4e73-8df0-451ab228052b Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd2 KernelName:nbd2 Encrypted:false}
2025-05-18 17:46:03.590532 D | cephosd: &{Name:nbd3 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/18 Size:0 UUID:aaee7f93-fc9b-4594-abd4-7a342efc41f3 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd3 KernelName:nbd3 Encrypted:false}
2025-05-18 17:46:03.590550 D | cephosd: &{Name:nbd4 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/19 Size:0 UUID:083a58c9-abc0-4fbe-b29a-218136a425ba Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd4 KernelName:nbd4 Encrypted:false}
2025-05-18 17:46:03.590577 D | cephosd: &{Name:nbd5 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/20 Size:0 UUID:6478bfd6-a665-452e-a3d4-354f90452026 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd5 KernelName:nbd5 Encrypted:false}
2025-05-18 17:46:03.590597 D | cephosd: &{Name:nbd6 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/21 Size:0 UUID:161132c8-521f-4e24-be14-35be5b3dc9bd Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd6 KernelName:nbd6 Encrypted:false}
2025-05-18 17:46:03.590626 D | cephosd: &{Name:nbd7 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/22 Size:0 UUID:d7c13d8d-fc98-4f0e-b167-a58586ce9ab0 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd7 KernelName:nbd7 Encrypted:false}
2025-05-18 17:46:03.590654 D | cephosd: &{Name:dm-0 Parent: HasChildren:false DevLinks:/dev/mapper/cryptroot /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-c5d7e8c9a4434c168fc844cab2369af0-cryptroot /dev/disk/by-id/dm-name-cryptroot /dev/disk/by-id/lvm-pv-uuid-8LQxm1-uTQc-ZnfW-eNtg-4Ewt-QU8Y-s8RSfl Size:254968594432 UUID: Serial: Type:crypt Rotational:false Readonly:false Partitions:[] Filesystem:LVM2_member Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/cryptroot KernelName:dm-0 Encrypted:false}
2025-05-18 17:46:03.590677 D | cephosd: &{Name:dm-1 Parent: HasChildren:false DevLinks:/dev/vgroot/root /dev/disk/by-uuid/25d9d673-f7bf-4c9e-bf80-d5b8884caacb /dev/disk/by-id/dm-name-vgroot-root /dev/mapper/vgroot-root /dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQQPSBPMGVRHfsZNuqxazGRcF0tarySee0 Size:34359738368 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:rootfs Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/vgroot-root KernelName:dm-1 Encrypted:false}
2025-05-18 17:46:03.590699 D | cephosd: &{Name:dm-2 Parent: HasChildren:false DevLinks:/dev/mapper/cryptdata0 /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-6523c42496154217ada31ce912013e2d-cryptdata0 /dev/disk/by-id/dm-name-cryptdata0 Size:800149299200 UUID: Serial: Type:crypt Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/cryptdata0 KernelName:dm-2 Encrypted:false}
2025-05-18 17:46:03.590759 D | cephosd: &{Name:dm-3 Parent: HasChildren:false DevLinks:/dev/disk/by-id/dm-uuid-LVM-vKzIc6Z2Lg9D1YEwNm1aeKUkzwLYfUUQs1F2H7PIjBzm6TOuMAAKBenDO5cd9h4c /dev/disk/by-id/dm-name-vgroot-home /dev/disk/by-uuid/6f49edee-9ef4-41f0-a87f-4cc62304bbaf /dev/mapper/vgroot-home /dev/vgroot/home Size:8589934592 UUID: Serial: Type:lvm Rotational:false Readonly:false Partitions:[] Filesystem:ext4 Mountpoint:home Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/mapper/vgroot-home KernelName:dm-3 Encrypted:false}
2025-05-18 17:46:03.590778 D | cephosd: &{Name:nvme0n1p1 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part1 /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part1 /dev/disk/by-partlabel/EFI\x20system\x20partition /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part1 /dev/disk/by-uuid/4AAA-4FB4 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part1 /dev/disk/by-partuuid/01813989-e41f-4d45-86d7-7e25abde893f Size:1073741824 UUID: Serial:KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:vfat Mountpoint:efi Vendor: Model:KBG40ZNS256G NVMe TOSHIBA 256GB WWN:eui.01000000000000008ce38e040021ad4b WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p1 KernelName:nvme0n1p1 Encrypted:false}
2025-05-18 17:46:03.590786 D | cephosd: &{Name:nvme0n1p2 Parent:nvme0n1 HasChildren:false DevLinks:/dev/disk/by-partuuid/549319a0-62cb-4db4-84c3-3d414c411f8c /dev/disk/by-path/pci-0000:02:00.0-nvme-1-part2 /dev/disk/by-partlabel/Linux\x20filesystem /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1-part2 /dev/disk/by-uuid/c5d7e8c9-a443-4c16-8fc8-44cab2369af0 /dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN-part2 /dev/disk/by-id/nvme-eui.01000000000000008ce38e040021ad4b-part2 Size:254985371648 UUID: Serial:KBG40ZNS256G_NVMe_TOSHIBA_256GB_89NPD108PQEN_1 Type:part Rotational:false Readonly:false Partitions:[] Filesystem:crypto_LUKS Mountpoint: Vendor: Model:KBG40ZNS256G NVMe TOSHIBA 256GB WWN:eui.01000000000000008ce38e040021ad4b WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nvme0n1p2 KernelName:nvme0n1p2 Encrypted:false}
2025-05-18 17:46:03.590810 D | cephosd: &{Name:nbd8 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/23 Size:0 UUID:11c10883-3e3c-42f8-94a9-15a0cca5604c Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd8 KernelName:nbd8 Encrypted:false}
2025-05-18 17:46:03.590831 D | cephosd: &{Name:nbd9 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/24 Size:0 UUID:413aab43-a5fb-43ed-8c81-be389c7b707d Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd9 KernelName:nbd9 Encrypted:false}
2025-05-18 17:46:03.590859 D | cephosd: &{Name:nbd10 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/25 Size:0 UUID:3209e73d-417b-453f-a0b9-7dac2c3cfc6d Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd10 KernelName:nbd10 Encrypted:false}
2025-05-18 17:46:03.590883 D | cephosd: &{Name:nbd11 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/26 Size:0 UUID:4b04474c-fc00-4cf5-a616-7bb74424a1a5 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd11 KernelName:nbd11 Encrypted:false}
2025-05-18 17:46:03.590905 D | cephosd: &{Name:nbd12 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/27 Size:0 UUID:c773ea59-4309-4a96-b37d-88ea88f566a1 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd12 KernelName:nbd12 Encrypted:false}
2025-05-18 17:46:03.590927 D | cephosd: &{Name:nbd13 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/28 Size:0 UUID:16edbd81-6c24-4712-8b44-61d24897f234 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd13 KernelName:nbd13 Encrypted:false}
2025-05-18 17:46:03.590947 D | cephosd: &{Name:nbd14 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/29 Size:0 UUID:10bfe7f7-cb3e-4a91-ad1c-1c0704bb65bf Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd14 KernelName:nbd14 Encrypted:false}
2025-05-18 17:46:03.590972 D | cephosd: &{Name:nbd15 Parent: HasChildren:false DevLinks:/dev/disk/by-diskseq/30 Size:0 UUID:1b23b721-1d95-4889-b933-353d6f0b59a4 Serial: Type:disk Rotational:false Readonly:false Partitions:[] Filesystem: Mountpoint: Vendor: Model: WWN: WWNVendorExtension: Empty:false CephVolumeData: RealPath:/dev/nbd15 KernelName:nbd15 Encrypted:false}
2025-05-18 17:46:03.590989 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591045 E | cephosd: skipping device "nbd0", failed to get OSD information. failed to read signature from "nbd0". EOF
2025-05-18 17:46:03.591050 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591076 E | cephosd: skipping device "nbd1", failed to get OSD information. failed to read signature from "nbd1". EOF
2025-05-18 17:46:03.591095 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591134 E | cephosd: skipping device "nbd2", failed to get OSD information. failed to read signature from "nbd2". EOF
2025-05-18 17:46:03.591139 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591160 E | cephosd: skipping device "nbd3", failed to get OSD information. failed to read signature from "nbd3". EOF
2025-05-18 17:46:03.591164 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591185 E | cephosd: skipping device "nbd4", failed to get OSD information. failed to read signature from "nbd4". EOF
2025-05-18 17:46:03.591203 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591242 E | cephosd: skipping device "nbd5", failed to get OSD information. failed to read signature from "nbd5". EOF
2025-05-18 17:46:03.591262 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591298 E | cephosd: skipping device "nbd6", failed to get OSD information. failed to read signature from "nbd6". EOF
2025-05-18 17:46:03.591302 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591324 E | cephosd: skipping device "nbd7", failed to get OSD information. failed to read signature from "nbd7". EOF
2025-05-18 17:46:03.591343 I | cephosd: skipping device "dm-0" because it contains a filesystem "LVM2_member"
2025-05-18 17:46:03.591360 I | cephosd: skipping device "dm-1" with mountpoint "rootfs"
2025-05-18 17:46:03.591376 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:03.591771 D | exec: Running command: lsblk /dev/mapper/cryptdata0 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2025-05-18 17:46:03.596181 D | sys: lsblk output: "SIZE=\"800149299200\" ROTA=\"0\" RO=\"0\" TYPE=\"crypt\" PKNAME=\"\" NAME=\"/dev/mapper/cryptdata0\" KNAME=\"/dev/dm-2\" MOUNTPOINT=\"\" FSTYPE=\"\""
2025-05-18 17:46:03.596203 D | exec: Running command: ceph-volume inventory --format json /dev/mapper/cryptdata0
2025-05-18 17:46:04.624899 I | cephosd: skipping device "dm-2": ["Device type is not acceptable. It should be raw device or partition"].
2025-05-18 17:46:04.624911 I | cephosd: skipping device "dm-3" with mountpoint "home"
2025-05-18 17:46:04.624930 I | cephosd: skipping device "nvme0n1p1" with mountpoint "efi"
2025-05-18 17:46:04.624933 I | cephosd: skipping device "nvme0n1p2" because it contains a filesystem "crypto_LUKS"
2025-05-18 17:46:04.624936 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625017 E | cephosd: skipping device "nbd8", failed to get OSD information. failed to read signature from "nbd8". EOF
2025-05-18 17:46:04.625022 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625060 E | cephosd: skipping device "nbd9", failed to get OSD information. failed to read signature from "nbd9". EOF
2025-05-18 17:46:04.625065 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625098 E | cephosd: skipping device "nbd10", failed to get OSD information. failed to read signature from "nbd10". EOF
2025-05-18 17:46:04.625102 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625216 E | cephosd: skipping device "nbd11", failed to get OSD information. failed to read signature from "nbd11". EOF
2025-05-18 17:46:04.625225 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625303 E | cephosd: skipping device "nbd12", failed to get OSD information. failed to read signature from "nbd12". EOF
2025-05-18 17:46:04.625308 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625449 E | cephosd: skipping device "nbd13", failed to get OSD information. failed to read signature from "nbd13". EOF
2025-05-18 17:46:04.625457 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625494 E | cephosd: skipping device "nbd14", failed to get OSD information. failed to read signature from "nbd14". EOF
2025-05-18 17:46:04.625498 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2025-05-18 17:46:04.625534 E | cephosd: skipping device "nbd15", failed to get OSD information. failed to read signature from "nbd15". EOF
2025-05-18 17:46:04.628587 I | cephosd: configuring osd devices: {"Entries":{}}
2025-05-18 17:46:04.628613 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2025-05-18 17:46:04.628795 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list --format json
2025-05-18 17:46:04.931252 D | cephosd: {}
2025-05-18 17:46:04.931301 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2025-05-18 17:46:04.931318 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2025-05-18 17:46:09.624695 D | cephosd: {}
2025-05-18 17:46:09.624715 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2025-05-18 17:46:09.624721 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "lab05"
lsblk output:

```console
$ lsblk --noempty
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                 8:0    0 745.2G  0 disk
└─cryptdata0      254:3    0 745.2G  0 crypt
nvme0n1           259:0    0 238.5G  0 disk
├─nvme0n1p1       259:1    0     1G  0 part  /efi
└─nvme0n1p2       259:2    0 237.5G  0 part
  └─cryptroot     254:0    0 237.5G  0 crypt
    ├─vgroot-root 254:1    0    32G  0 lvm   /
    └─vgroot-home 254:2    0     8G  0 lvm   /home
```
Digging through the code a little, it looks like the error is coming from Ceph's own `is_acceptable_device` check here. That's where we end up after the OSD prepare daemon's `getAvailableDevices` method calls `CheckIfDeviceAvailable` here, which in turn runs `ceph-volume inventory` here.
I don't see any obvious recent changes in this code path, but I'll look through it again tomorrow.
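If it helps, the check can be reproduced by hand. This is the same invocation the prepare pod ran above; for the crypt device the JSON should report the device as unavailable, with that message listed under `rejected_reasons` (output shape from memory, so treat it as approximate):

```console
# Run on the affected node, or from a pod with access to the host's /dev
# (e.g. the osd-prepare pod). For the crypt device, expect "available": false
# and "Device type is not acceptable. It should be raw device or partition"
# among the rejected_reasons.
$ ceph-volume inventory --format json /dev/mapper/cryptdata0
```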
Thanks for looking into it and for the report. I’ve come down with a stomach bug, so I haven’t been able to look into things properly yet. I’ll get to it starting tomorrow.
I'm trying to reproduce your problem. Please wait for a while.
@chrisblatt
I reproduced your problem. It also turns out this configuration was already unsupported when I created that table, so the table entry is incorrect. I'm sorry about that; I'll fix the table later.
> To get around this issue I was using LVM to create an unformatted LV to pass to Ceph, but then I noticed some poor fio benchmarking so wanted to create a metadata device, but ran into an error and found in the configuration that is not supported if the OSD device is lvm.
>
> My requirements are full disk encryption if possible or could look at just ceph data encryption but didn't see in the docs how to configure that with rook.
I'll look into it. Let me clarify your requirements first. Is my understanding below correct?
- OSDs must be encrypted; you don't mind whether the encrypted devices are created by you or by Ceph.
- OSDs must have metadata devices to avoid poor performance.
@satoru-takeuchi your understanding is correct. We have some sites using HDDs, and my thinking was that a journal/metadata device would help mitigate some of their performance issues.
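For the "formatted by Ceph" option, a host-based cluster can have ceph-volume do the encryption itself via the `encryptedDevice` OSD setting, with `metadataDevice` pointing at the metadata (DB) device. A rough, untested sketch — node and device names are placeholders, not taken from the cluster above:

```yaml
# Hypothetical host-based storage section of the CephCluster CR.
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: lab05
      devices:
        - name: sda                  # raw, unformatted disk
          config:
            encryptedDevice: "true"  # ceph-volume creates the dmcrypt layer itself
            metadataDevice: nvme0n1  # separate metadata (DB) device
```

Since Ceph then sees a raw device and builds the dmcrypt layer itself, this should sidestep the crypt-type detection problem above.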
I'm trying to reproduce your problem. Please wait for a while.
Still in progress...
I succeeded in creating an OSD on top of a crypt device (encrypted by myself) in a PVC-based storage cluster, using the manifests below. Is this approach acceptable for you, or would you like to create such OSDs in the host-based storage cluster you currently use?
```yaml
# Used for the OSD's data device
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-osd-data
  labels:
    disk-type: crypt
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
  local:
    path: /dev/mapper/crypt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
# Used for the OSD's metadata device
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-osd-metadata
  labels:
    disk-type: loop
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
  local:
    path: /dev/loop1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1
    allowMultiplePerNode: false
  cephVersion:
    image: quay.io/ceph/ceph:v19.2.1
    allowUnsupported: false
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mgr:
    count: 1
  dashboard:
    enabled: false
  crashCollector:
    disable: true
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 1
        portable: false
        encrypted: false
        placement:
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-osd
                      - rook-ceph-osd-prepare
        preparePlacement:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - rook-ceph-osd
                      - key: app
                        operator: In
                        values:
                          - rook-ceph-osd-prepare
                  topologyKey: kubernetes.io/hostname
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 5Gi
              storageClassName: manual
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
              selector:
                matchLabels:
                  disk-type: crypt
          - metadata:
              name: metadata
            spec:
              resources:
                requests:
                  storage: 5Gi
              storageClassName: manual
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
              selector:
                matchLabels:
                  disk-type: loop
    onlyApplyOSDPlacement: false
```
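For reference, the `/dev/mapper/crypt` device the data PV points at can be prepared on the node beforehand with plain cryptsetup, e.g. (disk and key-file paths are placeholders):

```console
# Format the whole disk as LUKS and open it. Leave the opened mapping
# unformatted so ceph-volume will accept it.
$ cryptsetup luksFormat /dev/sdX --key-file /root/osd.key
$ cryptsetup open /dev/sdX crypt --key-file /root/osd.key
# /dev/mapper/crypt now exists and can back the "local-osd-data" PV above.
```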
That would work for me; I'm not tied to how I was doing it before. Are there any obvious risks/caveats to PV-based OSDs?
No obvious risks. Since this approach requires creating PVs before creating OSDs, the operational cost might be a bit higher. To reduce this cost, local-static-provisioner or similar tools would help.
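For example, with sig-storage-local-static-provisioner the crypt devices would be exposed by linking them into the provisioner's discovery directory, something like this (paths are illustrative):

```console
# The provisioner watches a discovery directory per storage class and
# creates a Block-mode PV for each device link it finds there.
$ mkdir -p /mnt/osd-disks
$ ln -s /dev/mapper/cryptdata0 /mnt/osd-disks/cryptdata0
```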
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
If I understood correctly, LUKS-encrypted block devices are not supported with a host storage cluster? I wanted to use them that way without PVs, but maybe encrypted LVM devices are supported?
> If I understood correctly, LUKS-encrypted block devices are not supported with a host storage cluster?
Correct.
> I wanted to use them that way without PVs, but maybe encrypted LVM devices are supported?
No, that configuration is not supported either.
Please also see the table describing supported configurations.