kubectl-df-pv
about metrics
Hi,
For some PVs the results are strange: I see the whole storage capacity for the last three PVs instead of the PV capacity. The first one uses the rook-ceph-rbd StorageClass and the last three use rook-cephfs. Do you think this is related to df-pv or to the CSI driver?
NAMESPACE PVC NAME PV NAME POD NAME VOLUME MOUNT NAME SIZE USED AVAILABLE %USED IUSED IFREE %IUSED
sandbox data-drive-preprod-mariadb-galera-0 pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896 drive-preprod-mariadb-galera-0 data 1014Mi 695Mi 318Mi 68.63 304 523984 0.06
sandbox data-drive-preprod-mariadb-tooling-backup pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b drive-preprod-mariadb-tooling-restore-shell-c5c585478-wxk5m mariadb 21Gi 500Mi 21Gi 2.27 33290 18446744073709551615 100.00
sandbox drive-preprod-xxx-nextcloud-ncdata pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 bckp-nextcloud-preprod-xxx-basic-volume-bckp-restore-shellcfgjx source 21Gi 500Mi 21Gi 2.27 33290 18446744073709551615 100.00
sandbox drive-preprod-xxx-nextcloud-ncdata pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 drive-preprod-xxx-nextcloud-7b877b4d78-w6cg8 data 21Gi 500Mi 21Gi 2.27 33290 18446744073709551615 100.00
$ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-drive-preprod-mariadb-galera-0 Bound pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896 1Gi RWO rook-ceph-block 35h
data-drive-preprod-mariadb-tooling-backup Bound pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b 500Mi RWX rook-cephfs 57d
drive-preprod-ird-nextcloud-ncdata Bound pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 1Gi RWX rook-cephfs 57d
$ k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-02af006f-b180-4e00-b0f9-d2792b81bdf0 1Gi RWX Delete Bound sandbox/drive-preprod-xxx-nextcloud-ncdata rook-cephfs 57d
pvc-5cb1551b-8958-4c4f-8298-42c8c09ab896 1Gi RWO Retain Bound sandbox/data-drive-preprod-mariadb-galera-0 rook-ceph-block 5d22h
pvc-6ccbd327-3f9f-4c5a-a70b-3575c19d502b 500Mi RWX Delete Bound sandbox/data-drive-preprod-mariadb-tooling-backup rook-cephfs 57d
@tcoupin thanks for reporting the ticket 👏! Can you give more information by running kubectl df-pv -v trace and then searching for the specific PVs? I'd like to inspect the JSON returned from the node that hosts those pods (obviously, redact any PII if you need to).
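For anyone who wants to look at that JSON directly: as far as I can tell, df-pv reads the kubelet stats summary for each node, the same data you get from kubectl get --raw /api/v1/nodes/<node>/proxy/stats/summary. Below is a minimal client-go sketch of that request (not the plugin's actual code); the node name worker-1 is a placeholder and the kubeconfig path is assumed to be the default.

```go
package main

// Minimal sketch (not the plugin's actual code): fetch the kubelet stats
// summary for one node through the API server's node proxy.
// "worker-1" is a placeholder node name; ~/.kube/config is assumed.

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent to: kubectl get --raw /api/v1/nodes/worker-1/proxy/stats/summary
	raw, err := client.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("worker-1").
		SubResource("proxy").
		Suffix("stats/summary").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}

	// Each pod's volume entries carry capacityBytes/usedBytes/availableBytes
	// and inodes/inodesFree/inodesUsed as reported by the node.
	fmt.Println(string(raw))
}
```

Grepping that output for pvc-02af006f or pvc-6ccbd327 should show the raw values for the volumes above, before df-pv does any formatting.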
Here are the logs with trace log level: df.log
@tcoupin it seems like you hit an auth error, unrelated to this issue, so those logs are not helpful. Can you try to reproduce the exact output above and send the trace logs from that run?
It looks to me like an issue with the way CephFS reports inodes. See https://tracker.ceph.com/issues/24849
It might now be solved in a newer Ceph version.
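For anyone else hitting this: the IFREE value in the table above is just -1 reinterpreted as an unsigned 64-bit integer, which is consistent with CephFS reporting an "unknown/unlimited" free-inode count instead of a real one. A small Go sketch of the arithmetic (the used/(used+free) formula below is only my assumption for illustration, not necessarily how df-pv computes %IUSED):

```go
package main

import "fmt"

func main() {
	// 18446744073709551615 is ^uint64(0), i.e. -1 reinterpreted as an
	// unsigned 64-bit integer (exactly the IFREE value reported above).
	fmt.Println(^uint64(0)) // 18446744073709551615

	// Assumption for illustration: if the percentage were computed as
	// used/(used+free) in uint64 arithmetic, the addition wraps around,
	// which would yield the ~100% IUSED seen in the output.
	used, free := uint64(33290), ^uint64(0)
	total := used + free // wraps to 33289
	fmt.Printf("total=%d iused%%=%.2f\n", total, float64(used)/float64(total)*100)
}
```

Either way, the raw inodesFree value coming from the node is the thing to check, per the Ceph tracker issue linked above.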