dynamic-nfs-provisioner
Issue with NFS provisioner
Hello, I have installed a k8s cluster with 3 master and 7 worker nodes (node names like k8s-node1 through k8s-node6, plus powerbi). On k8s-node4, k8s-node5, and k8s-node6 I added disks and installed OpenEBS with cStor, then installed the OpenEBS dynamic-nfs-provisioner with Helm and set cStor as the default backend. It worked for 2 days, but then I could no longer run ls in the mounted folder, so I reinstalled it and now it goes into an error state:
Example manifest used for testing:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
spec:
  storageClassName: openebs-kernel-nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-node4
      containers:
        - name: mongo
          image: mongo:3.6.17-xenial
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: storage
              mountPath: /data/db
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: nfs-pvc
```
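To exercise this manifest, a minimal sketch of applying it and checking the claim and pod (the file name mongo-nfs.yaml is just a placeholder for wherever the manifest above is saved):

```sh
# Apply the test manifest (assuming it is saved as mongo-nfs.yaml)
kubectl apply -f mongo-nfs.yaml

# Check that the NFS-backed claim binds and the mongo pod starts
kubectl get pvc nfs-pvc -n default
kubectl get pods -n default -l app=mongo
```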
Logs (events from the NFS server pod):

```
nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27-85d9d9ff7b-49db9.16d0b19997a7c268   FailedMount
Unable to attach or mount volumes: unmounted volumes=[exports-dir], unattached volumes=[kube-api-access-tkpsj exports-dir]: timed out waiting for the condition

nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27-85d9d9ff7b-49db9.16d0b17fb6a82632   FailedMount
MountVolume.MountDevice failed for volume "pvc-22237728-5a18-4c37-8d59-f088745a14af" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
```
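For context, a sketch of how these events can be pulled again from the stuck NFS server pod (pod name taken from the listing below; the provisioner namespace is assumed to be openebs):

```sh
# Describe the stuck NFS server pod; the FailedMount events appear at the bottom
kubectl describe pod nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27-85d9d9ff7b-49db9 -n openebs

# Or fetch the same events directly
kubectl get events -n openebs \
  --field-selector involvedObject.name=nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27-85d9d9ff7b-49db9
```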
-
kubectl get pods -n <openebs_namespace> --show-labels
NAME READY STATUS RESTARTS AGE LABELS
cstor-disk-pool1-9r9x-6b4d87f49b-7xd7l 3/3 Running 3 (37d ago) 52d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool1,openebs.io/cstor-pool-instance=cstor-disk-pool1-9r9x,openebs.io/version=3.0.0,pod-template-hash=6b4d87f49b
cstor-disk-pool1-hzg5-6b79f9998f-xp2c2 3/3 Running 3 (37d ago) 52d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool1,openebs.io/cstor-pool-instance=cstor-disk-pool1-hzg5,openebs.io/version=3.0.0,pod-template-hash=6b79f9998f
cstor-disk-pool1-z9bb-75c7b65f47-77n8h 3/3 Running 0 8d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool1,openebs.io/cstor-pool-instance=cstor-disk-pool1-z9bb,openebs.io/version=3.0.0,pod-template-hash=75c7b65f47
cstor-disk-pool2-j2dh-58fdcfcfff-6txm2 3/3 Running 3 (37d ago) 52d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool2,openebs.io/cstor-pool-instance=cstor-disk-pool2-j2dh,openebs.io/version=3.0.0,pod-template-hash=58fdcfcfff
cstor-disk-pool2-jsww-665fd66759-g289l 3/3 Running 0 8d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool2,openebs.io/cstor-pool-instance=cstor-disk-pool2-jsww,openebs.io/version=3.0.0,pod-template-hash=665fd66759
cstor-disk-pool2-kr9c-7bd97c946f-l96lg 3/3 Running 3 (37d ago) 52d app=cstor-pool,openebs.io/cstor-pool-cluster=cstor-disk-pool2,openebs.io/cstor-pool-instance=cstor-disk-pool2-kr9c,openebs.io/version=3.0.0,pod-template-hash=7bd97c946f
nfs-pvc-19d0f9ca-5475-44a8-84e6-dc8a59d095a5-84759bd48b-7gd2x 1/1 Running 0 7d8h openebs.io/nfs-server=nfs-pvc-19d0f9ca-5475-44a8-84e6-dc8a59d095a5,pod-template-hash=84759bd48b
nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27-85d9d9ff7b-49db9 0/1 ContainerCreating 0 12m openebs.io/nfs-server=nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27,pod-template-hash=85d9d9ff7b
nfs-pvc-884284a8-7ed5-4876-8d9f-f53ca19c8017-7765c88c4-zrsbg 1/1 Running 0 8d openebs.io/nfs-server=nfs-pvc-884284a8-7ed5-4876-8d9f-f53ca19c8017,pod-template-hash=7765c88c4
nfs-pvc-8eb1e4e3-8ce5-4951-b522-e55e320ab32c-5b5c789bfd-rhsld 1/1 Running 0 8d openebs.io/nfs-server=nfs-pvc-8eb1e4e3-8ce5-4951-b522-e55e320ab32c,pod-template-hash=5b5c789bfd
nfs-pvc-b71d8186-8a72-4cc8-81f7-19b2d64e67f4-dc95dddd6-2txtv 1/1 Running 0 8d openebs.io/nfs-server=nfs-pvc-b71d8186-8a72-4cc8-81f7-19b2d64e67f4,pod-template-hash=dc95dddd6
nfs-pvc-d05e7993-83c9-489f-a38c-c0b38c89402f-6ccd46fd9d-s94kd 1/1 Running 0 8d openebs.io/nfs-server=nfs-pvc-d05e7993-83c9-489f-a38c-c0b38c89402f,pod-template-hash=6ccd46fd9d
openebs-cstor-admission-server-5754659f4b-5hkmq 1/1 Running 1 (37d ago) 52d app=cstor-admission-webhook,chart=cstor-3.0.2,component=cstor-admission-webhook,heritage=Helm,openebs.io/component-name=cstor-admission-webhook,openebs.io/version=3.0.0,pod-template-hash=5754659f4b,release=openebs
-
kubectl get pvc -n <openebs_namespace>
kubectl get pvc -n openebs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc-19d0f9ca-5475-44a8-84e6-dc8a59d095a5 Bound pvc-05044693-9b69-4a99-91a1-815ff950851a 1Gi RWO cstor-csi-disk1 7d8h
nfs-pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27 Bound pvc-22237728-5a18-4c37-8d59-f088745a14af 5Gi RWO cstor-csi-disk1 13m
nfs-pvc-884284a8-7ed5-4876-8d9f-f53ca19c8017 Bound pvc-61a30045-5c4c-4f97-8974-db733100c594 3Gi RWO cstor-csi-disk1 8d
nfs-pvc-8eb1e4e3-8ce5-4951-b522-e55e320ab32c Bound pvc-97b8d215-adac-4bfd-9df9-c74ac71d6ddb 10Gi RWO cstor-csi-disk1 8d
nfs-pvc-b71d8186-8a72-4cc8-81f7-19b2d64e67f4 Bound pvc-9f2e94ab-57f8-4db0-a6f1-1fb2ba13fae8 100Gi RWO cstor-csi-disk1 8d
nfs-pvc-d05e7993-83c9-489f-a38c-c0b38c89402f Bound pvc-4b31f1dc-0cc7-4456-8621-d062e7141a73 3Gi RWO cstor-csi-disk1 8d
-
kubectl get pvc -n <application_namespace>
[root@k8s-master1 openebs]# kubectl get pvc -n default
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound pvc-23ee81ed-cb93-4727-9319-45a76e2b2e27 5Gi RWO openebs-kernel-nfs 14m
nfs-pvc1 Bound pvc-1558b073-ab7e-4f40-bb8f-8fd7fe6f5336 6Gi RWO cstor-csi-disk2 14m
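The naming above also shows how an application claim maps to its backing claim: the app PVC nfs-pvc is bound to PV pvc-23ee81ed-..., and the provisioner's backing claim in the openebs namespace is nfs-pvc-23ee81ed-.... A sketch of following that chain (assuming the provisioner namespace is openebs, as in the listings above):

```sh
# PV name behind the application claim (e.g. pvc-23ee81ed-...)
APP_PV=$(kubectl get pvc nfs-pvc -n default -o jsonpath='{.spec.volumeName}')

# The backing claim created by the provisioner is named "nfs-<app PV name>"
kubectl get pvc "nfs-${APP_PV}" -n openebs
```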
Environment details:
- OpenEBS version (use `kubectl get po -n openebs --show-labels`): https://pastebin.com/3a9i88uZ
- Kubernetes version (use `kubectl version`): 1.23.0
- Cloud provider or hardware configuration:
- OS (e.g. `cat /etc/os-release`): centos8stream
- Kernel (e.g. `uname -a`): Linux k8s-master1 5.15.2-1.el8.elrepo.x86_64 #1 SMP Wed Nov 10 18:10:59 EST 2021 x86_64 x86_64 x86_64 GNU/Linux
- Others:
From the logs, the NFS server pod is not able to mount the backing PV. Can you check the status of the CVRs for the backing PV? For a successful mount of the backing PV, all CVRs should be in a Healthy state.
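A sketch of that check (assuming the cStor resources live in the openebs namespace and that the CVRs carry the usual openebs.io/persistent-volume label; cvr is the short name for CStorVolumeReplica):

```sh
# All replicas of the backing volume should report Healthy
kubectl get cvr -n openebs \
  -l openebs.io/persistent-volume=pvc-22237728-5a18-4c37-8d59-f088745a14af

# The CStorVolume for the backing PV should also be Healthy
kubectl get cstorvolume -n openebs pvc-22237728-5a18-4c37-8d59-f088745a14af
```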
I understand this has been open for quite some time, but I'd like to check back on this. Is this issue resolved? From the info provided above, one of the nfs-pvc pods is in the ContainerCreating state, and that is possibly the backing PV the app is trying to mount, which in turn produces the RPC request timeout.
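Since the error is a CSI mount timeout, the cStor CSI node plugin logs on the node where the NFS server pod is scheduled may also help. A sketch, with the caveat that the DaemonSet name, container name, and namespace (openebs-cstor-csi-node, cstor-csi-plugin, openebs) depend on how the cStor CSI components were installed:

```sh
# Logs of the cStor CSI node plugin (names are assumptions; adjust to your install)
kubectl logs -n openebs daemonset/openebs-cstor-csi-node -c cstor-csi-plugin --tail=200
```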
Hello, I didn't solve this; I moved to NFS from an external provider.