[cinder-csi-plugin] support capacity in CSI
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened: Capacity tracking in the CSI external-provisioner (https://github.com/kubernetes-csi/external-provisioner#capacity-support) will be beta as of 1.21. We need to consider how to report capacity through an existing interface or a new one to be added.
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
Environment:
- openstack-cloud-controller-manager(or other related binary) version:
- OpenStack version:
- Others:
/assign
Looking at https://docs.openstack.org/api-ref/block-storage/v3/index.html,
there seems to be no suitable API for capacity retrieval... needs further checking.
You can GET /v3/{project_id}/volumes/{volume_id} and read the size field of the returned JSON (in GiB).
But that's for one volume... I thought the purpose of the CSI call is to return the total amount available in the pool?
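For reference, a minimal sketch of that per-volume lookup with gophercloud, assuming an already-authenticated block-storage v3 service client and the gophercloud v1-style two-argument `volumes.Get`; `volumeSizeGiB` is a hypothetical helper, not part of the plugin. It only answers "how big is this one volume?", which is exactly the limitation raised above:

```go
package capacity

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
)

// volumeSizeGiB (hypothetical) performs GET /v3/{project_id}/volumes/{volume_id}
// and returns the "size" field, which Cinder reports in GiB.
func volumeSizeGiB(client *gophercloud.ServiceClient, volumeID string) (int, error) {
	vol, err := volumes.Get(client, volumeID).Extract()
	if err != nil {
		return 0, err
	}
	return vol.Size, nil
}
```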
I re-read the doc... it seems CSI needs the total capacity from the node, which in the case of Cinder would be the total capacity available to use. But what if you use the quota maximum of the OpenStack project? Every OpenStack project has volume-count and capacity-limit quotas.
> But what if you use the quota maximum of the OpenStack project? Every OpenStack project has volume-count and capacity-limit quotas.
That's a valuable suggestion :). There might be edge cases, though; for example, if there is only 1T of disk but the quota is 2T, reporting 2T seems weird. @ramineni @lingxiankong any comments?
Well, I think that if I, as the OpenStack operator, set a 2T quota over 1T of disk, that's a bad configuration on my part; the CSI plugin can't do magic, hehe. Another approach is to use the compute API and get the storage of the nodes, but that is very mixed: for example, in my environment I use Ceph with OpenStack, so all compute nodes report the same capacity, while other drivers can report different storage sizes per node. In practice, the most important information is the project's limits.
OK, maybe we need to check how other cloud providers like AWS/Azure handle this, as they can't report total EBS capacity either :)
After another thought, I think it might be reasonable to report the storage quota here: https://docs.openstack.org/cinder/latest/cli/cli-cinder-quotas.html
Let me submit a PR based on this.
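A rough sketch of what that quota-based idea could look like, assuming gophercloud's block-storage quota-sets extension (`quotasets.GetUsage`); `freeGigabytes` is a hypothetical helper, not an existing cinder-csi-plugin function. The project-wide `gigabytes` quota minus what is already in use would be the "available capacity" from the tenant's point of view:

```go
package capacity

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/quotasets"
)

// freeGigabytes (hypothetical) reports how many GiB the project may still
// provision under its "gigabytes" quota.
func freeGigabytes(client *gophercloud.ServiceClient, projectID string) (int, error) {
	usage, err := quotasets.GetUsage(client, projectID).Extract()
	if err != nil {
		return 0, err
	}
	gb := usage.Gigabytes
	if gb.Limit < 0 {
		// -1 means "unlimited"; there is no meaningful number to report,
		// so the caller has to decide how to handle it.
		return 0, fmt.Errorf("project %s has an unlimited gigabytes quota", projectID)
	}
	return gb.Limit - gb.InUse, nil
}
```

Note that the edge case discussed above still applies: the quota can be larger than the physical pool behind it.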
Another thought: the CSI spec seems to expect a physical resource rather than a logical one, whereas a tenant quota is more of a logical resource. If we use the tenant quota, the params/options listed below won't be honored any more:
https://github.com/container-storage-interface/spec/blob/master/lib/go/csi/csi.pb.go
```go
type GetCapacityRequest struct {
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes that satisfy ALL of the
	// specified `volume_capabilities`. These are the same
	// `volume_capabilities` the CO will use in `CreateVolumeRequest`.
	// This field is OPTIONAL.
	VolumeCapabilities []*VolumeCapability `protobuf:"bytes,1,rep,name=volume_capabilities,json=volumeCapabilities,proto3" json:"volume_capabilities,omitempty"`
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes with the given Plugin
	// specific `parameters`. These are the same `parameters` the CO will
	// use in `CreateVolumeRequest`. This field is OPTIONAL.
	Parameters map[string]string `protobuf:"bytes,2,rep,name=parameters,proto3" json:"parameters,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes that in the specified
	// `accessible_topology`. This is the same as the
	// `accessible_topology` the CO returns in a `CreateVolumeResponse`.
	// This field is OPTIONAL. This field SHALL NOT be set unless the
	// plugin advertises the VOLUME_ACCESSIBILITY_CONSTRAINTS capability.
	AccessibleTopology *Topology `protobuf:"bytes,3,opt,name=accessible_topology,json=accessibleTopology,proto3" json:"accessible_topology,omitempty"`
}
```
But consider this scenario: you are a user of an OpenStack tenant and you don't have access to see the cloud's real physical resources. You deploy some virtual instances with Kubernetes on that tenant, and your "physical" storage is whatever the quota shows you; you can't see, for example, the real physical disks of the internal Ceph cluster used by the cloud.
So "physical" in this case is relative, I think.
OK, let me give the quota approach a try and see whether anyone has comments.
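For illustration, a hedged sketch of what a quota-backed `GetCapacity` could look like in the controller service, reusing the hypothetical `freeGigabytes` helper from the earlier snippet; the real plugin's controller server is structured differently, and `controllerServer` below is just a stand-in type. Note that `req.VolumeCapabilities`, `req.Parameters` and `req.AccessibleTopology` are simply ignored here, which is the logical-vs-physical concern raised above:

```go
package capacity

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/gophercloud/gophercloud"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controllerServer is a stand-in with only the fields this sketch needs.
type controllerServer struct {
	blockStorage *gophercloud.ServiceClient
	projectID    string
}

func (cs *controllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	// req.VolumeCapabilities, req.Parameters and req.AccessibleTopology are
	// not honored in this quota-based approach.
	free, err := freeGigabytes(cs.blockStorage, cs.projectID)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "GetCapacity failed: %v", err)
	}
	// The CSI response is in bytes, while the Cinder quota is in GiB.
	return &csi.GetCapacityResponse{
		AvailableCapacity: int64(free) * 1024 * 1024 * 1024,
	}, nil
}
```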
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.