cloud-provider-openstack

[cinder-csi-plugin] support capacity in CSI

jichenjc opened this issue Apr 08 '21 · 22 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened: Capacity support in external-provisioner (https://github.com/kubernetes-csi/external-provisioner#capacity-support) will be beta as of 1.21. We need to consider how to report capacity, either through an existing interface or a new one to be added.
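For context, before the CO will call GetCapacity at all, the driver has to advertise the GET_CAPACITY controller capability. A minimal sketch of that wiring (assumed names, not the current cinder-csi-plugin code):

package sketch

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerCapabilities advertises GET_CAPACITY so the CO knows it may
// call the GetCapacity RPC.
func controllerCapabilities() []*csi.ControllerServiceCapability {
	return []*csi.ControllerServiceCapability{
		{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{
					Type: csi.ControllerServiceCapability_RPC_GET_CAPACITY,
				},
			},
		},
	}
}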

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager(or other related binary) version:
  • OpenStack version:
  • Others:

jichenjc, Apr 08 '21

/assign

jichenjc, Apr 08 '21

Looking at https://docs.openstack.org/api-ref/block-storage/v3/index.html, there seems to be no suitable API for volume capacity retrieval .. need to check further

jichenjc, Apr 09 '21

GET /v3/{project_id}/volumes/{volume_id} and read the size field of the returned JSON (GiB)
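For illustration, roughly what that call looks like via gophercloud (a sketch, assuming the blockstorage v3 volume bindings):

package sketch

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
)

// volumeSizeGiB reads the "size" field (GiB) of one volume, i.e. the
// result of GET /v3/{project_id}/volumes/{volume_id}.
func volumeSizeGiB(client *gophercloud.ServiceClient, volumeID string) (int, error) {
	vol, err := volumes.Get(client, volumeID).Extract()
	if err != nil {
		return 0, err
	}
	return vol.Size, nil
}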

lfdominguez, Apr 11 '21

But that's for one volume... I thought the purpose of the CSI call is to return the total capacity of the pool?

jichenjc, Apr 12 '21

I re-read the doc... it seems CSI needs to get the total capacity from the node; in the case of Cinder that would be the total capacity available to use. But what if you use the quota max of the OpenStack project? All OpenStack projects have volume count and capacity limit quotas.

lfdominguez, Apr 12 '21

But what if you use the quota max of the OpenStack project? All OpenStack projects have volume count and capacity limit quotas.

That's a valuable suggestion :) There might be some edge cases though; for example, with only 1T of disks but a 2T quota, reporting 2T seems weird. @ramineni @lingxiankong any comments?

jichenjc, Apr 12 '21

Well, I think that if I, as the OpenStack operator, set a 2T quota over 1T of disk, that's a bad config on my part; the CSI driver can't do magic jejej... Another approach is to use the compute API and get the storage of the nodes, but that's very mixed: for example, in my environment I use Ceph with OpenStack, so all compute nodes report the same capacity, while other drivers could report different storage sizes per node. In the end, the most important info is the project's limits.

lfdominguez, Apr 12 '21

OK, maybe we need to check how other cloud providers like AWS / Azure handle this, since they can't report the total EBS capacity either :)

After another thought, I think it might be reasonable to report the storage quota here: https://docs.openstack.org/cinder/latest/cli/cli-cinder-quotas.html

Let me submit a PR based on this; a rough sketch of reading the quota follows.
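A sketch of what reading the quota could look like, assuming gophercloud's blockstorage quotasets extension (names and field shapes hedged, not a final implementation):

package sketch

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/extensions/quotasets"
)

// freeQuotaGiB reports the project's remaining gigabytes quota. In Cinder a
// negative limit means "unlimited", which the caller has to special-case.
func freeQuotaGiB(client *gophercloud.ServiceClient, projectID string) (int64, bool, error) {
	usage, err := quotasets.GetUsage(client, projectID).Extract()
	if err != nil {
		return 0, false, err
	}
	limit := int64(usage.Gigabytes.Limit)
	if limit < 0 {
		return 0, true, nil // unlimited quota
	}
	free := limit - int64(usage.Gigabytes.InUse) - int64(usage.Gigabytes.Reserved)
	if free < 0 {
		free = 0
	}
	return free, false, nil
}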

jichenjc, Apr 13 '21

Another thought: the CSI spec seems to expect a physical resource rather than a logical one, and a tenant quota is more of a logical resource. If we use the tenant quota, the params/options listed below won't be honored any more:

https://github.com/container-storage-interface/spec/blob/master/lib/go/csi/csi.pb.go

type GetCapacityRequest struct {
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes that satisfy ALL of the
	// specified `volume_capabilities`. These are the same
	// `volume_capabilities` the CO will use in `CreateVolumeRequest`.
	// This field is OPTIONAL.
	VolumeCapabilities []*VolumeCapability `protobuf:"bytes,1,rep,name=volume_capabilities,json=volumeCapabilities,proto3" json:"volume_capabilities,omitempty"`
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes with the given Plugin
	// specific `parameters`. These are the same `parameters` the CO will
	// use in `CreateVolumeRequest`. This field is OPTIONAL.
	Parameters map[string]string `protobuf:"bytes,2,rep,name=parameters,proto3" json:"parameters,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
	// If specified, the Plugin SHALL report the capacity of the storage
	// that can be used to provision volumes that in the specified
	// `accessible_topology`. This is the same as the
	// `accessible_topology` the CO returns in a `CreateVolumeResponse`.
	// This field is OPTIONAL. This field SHALL NOT be set unless the
	// plugin advertises the VOLUME_ACCESSIBILITY_CONSTRAINTS capability.
	AccessibleTopology *Topology `protobuf:"bytes,3,opt,name=accessible_topology,json=accessibleTopology,proto3" json:"accessible_topology,omitempty"`
}
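For illustration, a quota-based GetCapacity would end up looking roughly like this (hypothetical wiring, reusing the freeQuotaGiB sketch from the earlier comment), which makes the problem concrete:

package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/gophercloud/gophercloud"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controllerServer stands in for the driver's controller service.
type controllerServer struct {
	blockStorage *gophercloud.ServiceClient
	projectID    string
}

func (cs *controllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	// A quota-based answer has no way to honor req.VolumeCapabilities,
	// req.Parameters or req.AccessibleTopology, which is exactly the
	// concern raised above.
	free, unlimited, err := freeQuotaGiB(cs.blockStorage, cs.projectID)
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	if unlimited {
		// An unlimited quota has no finite number to report; returning 0
		// would tell the CO there is no capacity at all.
		return nil, status.Error(codes.Unimplemented, "unlimited quota cannot be mapped to a capacity value")
	}
	return &csi.GetCapacityResponse{
		AvailableCapacity: free * 1024 * 1024 * 1024, // quota is in GiB, CSI wants bytes
	}, nil
}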

jichenjc, Apr 13 '21

But consider this scenario... you are a user of an OpenStack tenant; you don't have access to see the real physical resources of the cloud. You deploy some virtual instances on the tenant with Kubernetes, and your "physical" storage is the quota shown to you; you can't see, for example, the real physical hard disks of the internal Ceph used by the cloud.

So the "physical" in this case, I think, is relative.

lfdominguez, Apr 13 '21

OK, let me give the quota approach a try and see whether anyone has comments.

jichenjc, Apr 15 '21

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot, Jul 14 '21

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot, Aug 13 '21

/remove-lifecycle rotten

ramineni, Aug 23 '21

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Nov 21 '21

/remove-lifecycle stale

ramineni, Nov 24 '21

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Feb 22 '22

/remove-lifecycle stale

ramineni, Feb 22 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, May 23 '22

/remove-lifecycle stale

jichenjc, May 23 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Aug 21 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot, Sep 20 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot, Oct 20 '22

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot, Oct 20 '22