nfs-subdir-external-provisioner
ExternalProvisioning persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
Hi, I am trying to create a pvc but I get this message:
$ kubectl describe pvc idlcgrafana-pv
Name:          idlcgrafana-pv
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    idlcgrafana-6c5c75744b-t6fpr
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  ExternalProvisioning  14s (x2 over 29s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
  Normal  ExternalProvisioning  8s (x3 over 36s)   persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
These are the StorageClass, the provisioner pod status, and the PVC:
SC:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  onDelete: delete
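As a sanity check, the provisioner string above has to match whatever the running deployment announces via its PROVISIONER_NAME environment variable. A minimal comparison sketch, assuming the deployment is named nfs-subdir-external-provisioner as the ReplicaSet in the output below suggests:

$ kubectl get storageclass managed-nfs-storage -o jsonpath='{.provisioner}{"\n"}'
$ kubectl get deployment nfs-subdir-external-provisioner \
    -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}{"\n"}'
# The two values must be identical; if they differ, no provisioner ever picks up
# claims for this StorageClass and they stay Pending.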
Provisioner pod:
$ kubectl describe po nfs-subdir-external-provisioner-7d659d475-bs7g4
Name:           nfs-subdir-external-provisioner-7d659d475-bs7g4
Namespace:      default
Priority:       0
Node:           gke-px-sync-default-pool-5afcf177-zl6m/10.142.0.16
Start Time:     Tue, 22 Jun 2021 14:29:00 -0600
Labels:         app=nfs-subdir-external-provisioner
                pod-template-hash=7d659d475
                release=nfs-subdir-external-provisioner
Annotations:    <none>
Status:         Running
IP:             10.80.0.15
IPs:
  IP:           10.80.0.15
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-7d659d475
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   docker://246460c8c6ed3f8f6fd2e05f9bcc713e4e3d3d2b2d0c3702152f1c6b26accad2
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       docker-pullable://k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 22 Jun 2021 14:29:06 -0600
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        10.142.0.18
      NFS_PATH:          /home/borch
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-subdir-external-provisioner-token-25g9c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.142.0.18
    Path:      /home/borch
    ReadOnly:  false
  nfs-subdir-external-provisioner-token-25g9c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-subdir-external-provisioner-token-25g9c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
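If the names do match, the provisioner's own log is the next thing to check; it normally records when it receives a provision request for a claim. A minimal sketch, assuming the deployment name used above:

$ kubectl logs deployment/nfs-subdir-external-provisioner --tail=50
# Look for lines mentioning the claim (default/idlcgrafana-pv). If the claim never
# appears in the log at all, the provisioner never saw the request, which again
# points at a provisioner-name mismatch between the StorageClass and the pod.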
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: idlcgrafana-pv
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
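While the claim sits in Pending, its events can also be followed directly (a sketch; the field selector assumes the claim lives in the default namespace):

$ kubectl get pvc idlcgrafana-pv -w
$ kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=idlcgrafana-pv
# Repeated ExternalProvisioning events with no Provisioning/ProvisioningSucceeded
# event usually mean no running provisioner has claimed this StorageClass.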
I was able to mount the NFS export on one of my worker nodes.
Let me know if you find something incorrect.
Cheers.
How did you install the provisioner? Via Helm? If so, it might be a duplicate of #107 and you need to add --set storageClass.provisionerName=k8s-sigs.io/nfs-subdir-external-provisioner to helm install, e.g.:
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=x.x.x.x \
--set nfs.path=/exported/path \
--set storageClass.provisionerName=k8s-sigs.io/nfs-subdir-external-provisioner
I installed it via yaml and I seem to be stuck at the same message: waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator.
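For a plain-YAML install the same rule applies: the PROVISIONER_NAME env in the deployment manifest and the provisioner field in the StorageClass manifest must be the same string. A minimal sketch of how the two pieces should line up (excerpts only; file and object names are illustrative, adjust to your manifests):

# deployment manifest, container env
      env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner   # must equal the StorageClass provisioner

# StorageClass manifest
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner       # same string as PROVISIONER_NAME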
Did you get this sorted? The provisioner pod deploys with no issues, but when I try to deploy using dynamic provisioning I get the same error. I tried setting it as the default storage class to no avail, and I also moved it to a different namespace. Is there a role or permission missing? If I put an incorrect or invalid NFS path in the provisioner it fails, so I know the provisioner is able to mount the NFS path and permissions are not the issue.
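On the role/permission question: whether the provisioner's service account is actually allowed to create PVs can be checked directly. A sketch; the service-account name and namespace below are assumptions based on the default Helm release name:

$ kubectl auth can-i create persistentvolumes \
    --as=system:serviceaccount:default:nfs-subdir-external-provisioner
$ kubectl auth can-i update persistentvolumeclaims \
    --as=system:serviceaccount:default:nfs-subdir-external-provisioner
# Both should answer "yes"; a "no" points at missing RBAC rather than the NFS side.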
I installed it via yaml and I seem to be stuck at the same waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
The provisioner logs show:
unexpected error getting claim reference: selfLink was empty, can't make reference
@robertofabrizi, were you able to solve your issue?
Looks like it's related to https://stackoverflow.com/questions/65376314/kubernetes-nfs-provider-selflink-was-empty
Will using a v4.0.0 image or above resolve this?
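That error comes from older provisioner images on Kubernetes 1.20+, where selfLink is no longer populated, and moving to a v4.x image is the usual fix (as the next comment confirms). A minimal sketch of updating the running deployment in place, with the deployment and container names taken from the describe output earlier in this thread:

$ kubectl set image deployment/nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Then re-create the pending PVC so the updated provisioner handles it.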
My issue is resolved. I was referencing an old image; I updated to the latest, gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2.
I am also getting the same message: waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator.
persistentvolumeclaim/test-claim   Pending   managed-nfs-storage   11s
Please help me.
I get the same problem, and I don't know how to solve it.
I get the same problem, and I don't know how to solve it either.
I'm using the latest image (4.0.2), installed using Helm, and facing the same "waiting for a volume to be created" issue; not sure what the problem is.
I've been having the same issue. I know the NFS share works, and have verified it as such. Is anyone aware of a better guide for how to set this provisioner up?
I have the same issue, trying to mount a TrueNAS Scale NFS share. It does seem that this uses NFSv3, and I wish they would mention this somewhere. Alas, if that helps someone resolve the issue, it would be great.
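If the share only accepts a specific NFS version, one option is to pin mount options on the StorageClass; the provisioner is expected to copy them onto the PVs it creates. A sketch under that assumption, reusing the StorageClass from earlier in the thread and using nfsvers=4.1 as a placeholder value to adjust for your server:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
mountOptions:
  - nfsvers=4.1        # or nfsvers=3, depending on what the export allows
parameters:
  onDelete: delete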
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten