lvm-localpv
Empty thin pool LV seems to be taken into account for capacity monitoring
What steps did you take and what happened:
- I have a 10GB PV and an initially empty VG (no LVs)
- I configure a StorageClass to use Thin Provisioning (a sketch of the manifests follows the lvs output below)
- I start one Pod with two volumes of 8GB each
- a Thin Pool LV is created (size 8GB) with two LVs inside it, one for each Pod volume
$ sudo lvs
  LV                                        VG                Attr       LSize Pool                        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vg-workersworkers_thinpool                vg-workersworkers twi-aotz-- 8,00g                                    3,86   12,70
  pvc-45c18a20-e055-4264-afa5-f128816ea309  vg-workersworkers Vwi-aotz-- 8,00g vg-workersworkers_thinpool         1,97
  pvc-712624e8-c1c2-4026-9773-172f296359a9  vg-workersworkers Vwi-aotz-- 8,00g vg-workersworkers_thinpool         1,89
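For completeness, the StorageClass and PVCs are configured roughly like this (a sketch, not the exact manifests: the StorageClass name is the one that appears in the CSIStorageCapacity object further below, the volgroup value is the VG name from the LVMNode resource, PVC names are illustrative, and volumeBindingMode is assumed since capacity-based scheduling is clearly in play):

# StorageClass sketch: thinProvision is the relevant setting
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-locallvm
provisioner: local.csi.openebs.io
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  storage: "lvm"
  volgroup: "lvm-volumegroup-kumori-workers"
  thinProvision: "yes"
---
# Each of the two Pod volumes is claimed like this (8GB apiece)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: openebs-locallvm
  resources:
    requests:
      storage: 8Gi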
- after the Pod is deleted, the two per-volume LVs are removed, but the Thin Pool LV remains, which looks OK.
$ sudo lvs
  LV                          VG                Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vg-workersworkers_thinpool  vg-workersworkers twi-aotz-- 8,00g             0,00   10,79
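The "missing" space is consistent with what LVM reports for the VG itself: even at 0% data usage, the 8GB thin pool (plus its metadata LV) is carved out of the VG, so the VG only has ~2GB of free extents left, and that appears to be the number the driver picks up. For example (VG name as in the output above):

$ sudo vgs -o vg_name,vg_size,vg_free vg-workersworkers
$ sudo lvs -o lv_name,lv_size,data_percent,metadata_percent vg-workersworkers

VFree stays at ~2GB here even though the pool contains no data.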
- when I try to start another Pod with the same requirements, Kubernetes complains that there is not enough free space on the node.
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-25T17:30:26Z"
    message: '0/2 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
- Indeed, the node's storage capacity is reported as only ~2GB free. It looks like the (empty) Thin Pool LV is counted as used space, which I believe is wrong.
$ kubectl -n kube-system logs openebs-lvm-controller-0
[...]
I1025 16:59:50.516742 1 grpc.go:81] GRPC response: {"available_capacity":2126512128}
[...]
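(2126512128 bytes = 2028 × 1024² bytes, i.e. exactly the 2028Mi of free space reported for the volume group in the LVMNode resource below.)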
$ kubectl -n openebs get lvmnodes worker-1 -oyaml
apiVersion: local.openebs.io/v1alpha1
kind: LVMNode
metadata:
  creationTimestamp: "2022-10-24T14:13:26Z"
  generation: 12
  name: worker-1
  namespace: openebs
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name: worker-1
    uid: 7e8c75dc-5491-4884-a709-c693e7835490
  resourceVersion: "390350"
  uid: 4f545eaa-b2c4-4a82-bba9-244acc37cba3
volumeGroups:
- allocationPolicy: 0
  free: 2028Mi
  lvCount: 1
  maxLv: 0
  maxPv: 0
  metadataCount: 1
  metadataFree: 507Ki
  metadataSize: 1020Ki
  metadataUsedCount: 1
  missingPvCount: 0
  name: lvm-volumegroup-kumori-workers
  permissions: 0
  pvCount: 1
  size: 10236Mi
  snapCount: 0
  uuid: 6gd0JS-RoSk-TYN2-YptT-kCmo-22pg-RRGIlR
$ kubectl -n kube-system get csistoragecapacities csisc-nbcxt -oyaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  creationTimestamp: "2022-10-24T14:13:44Z"
  generateName: csisc-
  labels:
    csi.storage.k8s.io/drivername: local.csi.openebs.io
    csi.storage.k8s.io/managed-by: external-provisioner
  name: csisc-nbcxt
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    kind: StatefulSet
    name: openebs-lvm-controller
    uid: 43a9720c-801f-46b7-b6ae-4f819e5a18a2
  resourceVersion: "390055"
  uid: 40236777-92e9-4460-a735-b205ade6ffe9
capacity: 2028Mi
nodeTopology:
  matchLabels:
    kubernetes.io/hostname: worker-1
    openebs.io/nodename: worker-1
storageClassName: openebs-locallvm
What did you expect to happen: I expected the node to be reported as having 10GB of free space, since no "real" volumes exist, only the empty Thin Pool LV. Otherwise, I can't deploy the same Pod again, even if the disk space is free.
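To make the expectation concrete, the number I would expect the driver to advertise is the VG's free space plus the unused part of the thin pool (a rough sketch; VG and pool names are the ones from the lvs output above, adjust as needed):

$ LC_ALL=C sudo vgs --noheadings --nosuffix --units m -o vg_free vg-workersworkers
$ LC_ALL=C sudo lvs --noheadings --nosuffix --units m -o lv_size,data_percent vg-workersworkers/vg-workersworkers_thinpool

With the values above that is roughly 2028 MiB (VG free) + 8192 MiB × (100 − 0)/100 (unused pool) ≈ 10220 MiB, i.e. approximately the full 10GB of the disk.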
Environment:
- LVM Driver version: tested with 0.8 and 1.0.0
- Kubernetes version: 1.21.10
- Kubernetes installer & version: kubeadm
- Cloud provider or hardware configuration: baremetal
- OS: Ubuntu 20.04.3
Still the same here: if the remaining capacity is not enough, Kubernetes gets stuck with "not enough free storage" and the thin volume cannot be allocated.