Allow volume expansion / resizing of an in-use PVC
Is your feature request related to a problem? Please describe.
When trying to change the requested storage of a PVC I'm getting:
Error from server (Forbidden): persistentvolumeclaims "my-example-pvc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Since the EFS StorageClass is dynamically provisioning volumes, I assumed that resizing volumes is supported as well. Neither the aws-efs-csi-driver docs nor this repo documents anywhere whether volumes can currently be resized dynamically.
Hence I'm not even entirely sure whether aws-efs-csi-driver is supposed to support this right now and I'm just hitting a misconfiguration, or whether the driver currently does not support dynamic resizing of PVs at all.
Describe the solution you'd like in detail
I'd like the provisioned SC to have allowVolumeExpansion set to true.
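For illustration, a minimal sketch of what the rendered StorageClass could look like with expansion enabled (the name and fileSystemId are placeholders; whether the driver actually does anything with an expansion request is exactly what's unclear to me):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: your-efs-sc-name          # placeholder name
provisioner: efs.csi.aws.com
allowVolumeExpansion: true        # the flag this issue is asking for
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678       # placeholder EFS file system ID
  directoryPerms: "700"
```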
Describe alternatives you've considered
None.
Additional context
- k8s docs about SC volume expansion
- k8s docs about resizing in-use PVCs
- k8s blog post about resizing PVs at k8s version 1.11
Deployed via Helm, currently using chart version 2.1.5 (app version 1.3.3).
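For context, the chart renders the StorageClass from its storageClasses values block; a purely hypothetical sketch of what I'd like to be able to set (this assumes the chart would accept and pass allowVolumeExpansion through, which as far as I can tell it does not do today, hence this request):

```yaml
# Hypothetical values sketch -- assumes the chart's storageClasses block
# would pass allowVolumeExpansion through to the rendered StorageClass.
storageClasses:
  - name: your-efs-sc-name
    allowVolumeExpansion: true
    parameters:
      provisioningMode: efs-ap
      fileSystemId: fs-12345678   # placeholder EFS file system ID
      directoryPerms: "700"
```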
Steps to reproduce
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-example-pvc
spec:
  storageClassName: your-efs-sc-name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Mi
EOF
After the PVC has been provisioned successfully, run:
kubectl patch pvc my-example-pvc -p '{"spec": {"resources": {"requests": {"storage": "16Mi"}}}}'
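To confirm whether the StorageClass currently allows expansion at all, the field can be checked directly (an empty result means it is unset):

```sh
kubectl get storageclass your-efs-sc-name -o jsonpath='{.allowVolumeExpansion}'
```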
I am not from the project, but this is a quote from https://aws.amazon.com/blogs/containers/introducing-efs-csi-dynamic-provisioning/
Note that the actual storage capacity value in the persistent volume claim is not used, given the elastic capabilities of EFS. However, since the storage capacity is a required field in Kubernetes, you must specify a value.
I think this has to be discussed and it is a matter of taste, but in my opinion it would be less confusing to have allowVolumeExpansion set to true, even if the expansion has no real effect on the volume.
So if I'm understanding you correctly, what you're saying is that the specified limits are not enforced?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.