cloud-provider-openstack
[cinder-csi-plugin] Support for formatOptions
/kind feature
What happened: We created a 1 GB Cinder PVC, wrote about 66,000 small files to it, and ran out of inodes.
What you expected to happen: This is caused by the inode ratio of the ext4 filesystem that cinder-csi creates (with ext4's default bytes-per-inode ratio of 16384, a 1 GiB filesystem gets only about 65,536 inodes). We would like to be able to control formatOptions through the StorageClass.
How to reproduce it: create this StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
parameters:
  availability: nova
provisioner: kubernetes.io/cinder
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
Then create a PVC from it, write ~66,000 files to it, and check inode usage with `df -i`.
Anything else we need to know?: A similar implementation of format options exists in the AWS EBS CSI driver: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/48a4f6d713633fc9e79187f06168985d3d463f5a/pkg/driver/node.go#L265
Alternative suggestions to work around this issue (a different PVC size, mount options, ...) are also welcome.
I think we could use this function for the implementation: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/48a4f6d713633fc9e79187f06168985d3d463f5a/pkg/driver/node.go#L913
But does this belong in a general CSI component, e.g. the external-provisioner or external-attacher?
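For reference, here is a minimal sketch of how format options could be forwarded to mkfs on the node side, assuming the `FormatAndMountSensitiveWithFormatOptions` helper from `k8s.io/mount-utils` (the same helper the linked aws-ebs-csi-driver code relies on). The `formatOptions` volume-context key and the `stageVolume` function are hypothetical names for illustration, not an existing cinder-csi-plugin API:

```go
// Hypothetical sketch: forwarding StorageClass-provided format options down to
// mkfs during NodeStageVolume. Assumes k8s.io/mount-utils exposes
// FormatAndMountSensitiveWithFormatOptions; the "formatOptions" key is an
// illustrative name, not an existing cinder-csi-plugin parameter.
package main

import (
	"strings"

	mountutils "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// stageVolume formats (if needed) and mounts devicePath at stagingPath,
// passing any extra mkfs flags taken from the volume context to mkfs.
func stageVolume(devicePath, stagingPath, fsType string, volumeContext map[string]string) error {
	mounter := &mountutils.SafeFormatAndMount{
		Interface: mountutils.New(""),
		Exec:      utilexec.New(),
	}

	// "formatOptions" is a hypothetical volume-context key that a StorageClass
	// parameter could be plumbed into, e.g. "-i 8192" for ext4.
	var formatOptions []string
	if raw, ok := volumeContext["formatOptions"]; ok && raw != "" {
		formatOptions = strings.Fields(raw)
	}

	return mounter.FormatAndMountSensitiveWithFormatOptions(
		devicePath, stagingPath, fsType,
		nil, // mount options
		nil, // sensitive mount options
		formatOptions,
	)
}

func main() {
	// Example: lower the ext4 bytes-per-inode ratio so a 1 GiB volume gets
	// more inodes and small-file workloads do not exhaust them.
	_ = stageVolume("/dev/vdb", "/var/lib/kubelet/staging/pv-1", "ext4",
		map[string]string{"formatOptions": "-i 8192"})
}
```

On the sidecar question: the external-provisioner already passes StorageClass parameters to `CreateVolume`, and anything the driver returns in the volume context reaches `NodeStageVolume`, so this could plausibly be handled entirely inside the driver without changes to the CSI sidecars.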
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.