
IOPS Change

srfaytkn opened this issue 3 years ago • 21 comments

Hello, I created a StorageClass with iopsPerGB set to 10, and I have a 100 GiB PVC.

1. Does it cause any bug if the IOPS value is dynamically increased or decreased from the AWS console?
2. After dynamically changing the IOPS value from the AWS console, does a size update from the CSI driver cause a bug?
3. If these cause problems, how can I dynamically update the IOPS value independently of the volume size?
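
For reference, my understanding of the provisioning math, as a minimal sketch (the function and cap handling are illustrative assumptions, not the driver's actual code; 16,000 is AWS's documented per-volume IOPS cap for gp3):

```go
// Illustrative sketch of deriving provisioned IOPS from iopsPerGB at
// volume creation time. Not the driver's actual code.
package main

import "fmt"

const gp3MaxIOPS = 16000 // AWS-documented per-volume IOPS cap for gp3

func provisionedIOPS(iopsPerGB, sizeGiB, maxIOPS int64) int64 {
	iops := iopsPerGB * sizeGiB
	if iops > maxIOPS {
		iops = maxIOPS
	}
	return iops
}

func main() {
	// The scenario in this issue: iopsPerGB=10 on a 100 GiB volume.
	fmt.Println(provisionedIOPS(10, 100, gp3MaxIOPS)) // prints 1000
}
```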

thanks

/triage support

srfaytkn avatar Mar 27 '21 09:03 srfaytkn

@srfaytkn: The label(s) triage/support cannot be applied, because the repository doesn't have them.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 27 '21 09:03 k8s-ci-robot

  1. No, you should be able to update IOPS after the fact; the driver won't try to "reconcile" it back or anything like that.
  2. No, resize won't affect IOPS; the driver's ModifyVolume API call only includes a size argument, not an IOPS argument.
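
For illustration, the shape of a size-only resize request with the AWS SDK for Go v1 looks roughly like this (a sketch, not the driver's actual code; the volume ID is a placeholder):

```go
// Sketch of a size-only EC2 ModifyVolume request with the AWS SDK for
// Go v1. Because Iops is left nil, the volume's existing IOPS setting
// is untouched by the resize. Illustrative only; not the driver's code.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	_, err := svc.ModifyVolume(&ec2.ModifyVolumeInput{
		VolumeId: aws.String("vol-0123456789abcdef0"), // placeholder ID
		Size:     aws.Int64(200),                      // new size in GiB; no Iops set
	})
	if err != nil {
		log.Fatal(err)
	}
}
```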

wongma7 avatar Apr 07 '21 22:04 wongma7

thank you for help @wongma7

srfaytkn avatar Apr 08 '21 07:04 srfaytkn

I'd expect that if the iopsPerGB value is set, then increasing the disk size should also increase IOPS. Would y'all be open to me putting up a PR to scale IOPS as part of ControllerExpandVolume()?
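
Roughly what I have in mind, as a hypothetical sketch (the helper name, parameters, and clamping are my own assumptions, not an existing driver API):

```go
// Hypothetical helper for scaling IOPS during expansion: if the
// StorageClass set iopsPerGB, recompute IOPS from the new size and send
// it alongside the size in the same ModifyVolume request.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func expandWithIOPS(svc *ec2.EC2, volumeID string, newSizeGiB, iopsPerGB, maxIOPS int64) error {
	input := &ec2.ModifyVolumeInput{
		VolumeId: aws.String(volumeID),
		Size:     aws.Int64(newSizeGiB),
	}
	if iopsPerGB > 0 {
		iops := iopsPerGB * newSizeGiB
		if iops > maxIOPS { // e.g. 16,000 for gp3; 64,000 for io1/io2
			iops = maxIOPS
		}
		input.Iops = aws.Int64(iops)
	}
	_, err := svc.ModifyVolume(input)
	return err
}
```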

carloruiz avatar Apr 08 '21 20:04 carloruiz

@carloruiz I didn't think of that but it makes sense.

@ayberk what do you think, should resize also update iops?

wongma7 avatar Apr 20 '21 17:04 wongma7

Yeah I think that makes sense in this case. Since iopsPerGB is set, it really isn't a surprise bill situation.

ayberk avatar Apr 20 '21 22:04 ayberk

After this change, can I still dynamically update the IOPS value, regardless of volume size, from the AWS console? Sometimes an IOPS increase or decrease may be required without needing a larger disk.

srfaytkn avatar Apr 21 '21 06:04 srfaytkn

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Jul 20 '21 06:07 fejta-bot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 19 '21 07:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 18 '21 07:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.


k8s-ci-robot avatar Sep 18 '21 07:09 k8s-ci-robot

Hi, I came across this issue and wanted to second @carloruiz's proposal.

When I saw a configuration key "iopsPerGB", I didn't expect it to mean "iopsPerGB at volume creation time only". Shouldn't the desired IOPS always be set based on iopsPerGB and the actual size of the volume?

This gives users the ability to (indirectly) change IOPS via the PVC API, and I think it makes sense, since they can already use the volume-expansion API on the PVC to resize the volume.
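
For example, with client-go, growing the volume (and, under this proposal, its IOPS) would be a single PVC patch (a minimal sketch; the namespace and claim name are illustrative):

```go
// Sketch: expanding a PVC by patching spec.resources.requests.storage
// with client-go. Under iopsPerGB-based scaling, this one update would
// grow both capacity and IOPS. Namespace and claim name are illustrative.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	patch := []byte(`{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}`)
	_, err = cs.CoreV1().PersistentVolumeClaims("default").
		Patch(context.TODO(), "data-pvc", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
```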

@wongma7 @ayberk what do you think?

yuha0 avatar Jan 04 '22 05:01 yuha0

/reopen

Yeah, I still agree it makes sense. If we are worried about deviating from how this has functioned historically (in the in-tree driver), or about surprise-bill situations, it could go behind a flag, but it could also be the default behavior.

We would need CSI spec changes (https://github.com/container-storage-interface/spec/issues/491) to change IOPS directly, so until then this seems the best we can do.

wongma7 avatar Jan 05 '22 00:01 wongma7

@wongma7: Reopened this issue.


k8s-ci-robot avatar Jan 05 '22 00:01 k8s-ci-robot

/remove-lifecycle rotten

wongma7 avatar Jan 05 '22 00:01 wongma7

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 05 '22 00:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 05 '22 01:05 k8s-triage-robot

/remove-lifecycle frozen

rdpsin avatar May 19 '22 21:05 rdpsin

/remove-lifecycle rotten

rdpsin avatar May 19 '22 21:05 rdpsin

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 17 '22 22:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 16 '22 22:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 16 '22 23:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


k8s-ci-robot avatar Oct 16 '22 23:10 k8s-ci-robot

Hi folks, anything new on this front? This change would be valuable for making migration from gp2 to gp3 easier without impacting users who expect the old behavior. It would allow a seamless migration, without needing to update all Helm charts and controllers up front.

luanguimaraesla avatar Oct 20 '22 20:10 luanguimaraesla

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 19 '22 20:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


k8s-ci-robot avatar Nov 19 '22 20:11 k8s-ci-robot

Can the behavior here be documented?

kevincantu avatar Feb 22 '23 16:02 kevincantu