
Slow disk attach related to SetDiskLun failures


What happened:

Azure disk attach is slow because of LUN mapping issues.

What you expected to happen:

Disks to attach faster.

How to reproduce it (as minimally and precisely as possible):

So far I've seen this behavior reproduce after the cloud provider is throttled by Azure. When we retry AttachDisk after the retry interval is up, SetDiskLun fails. I've seen this add 10 minutes to an attach after throttling delays.

I believe this can happen in any scenario where the AttachDisk func doesn't successfully apply/update the VMSS model on the Azure side (throttling is one of those scenarios); a sketch of the failure mode follows.
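
To make that concrete, here is a minimal, self-contained Go sketch of the suspected failure mode. Everything in it is a hypothetical stand-in rather than the actual cloud-provider-azure code: vmModel, attachDisk, findDiskLun, the pending diskMap handling, and LUN 23 are all invented for illustration. The point it demonstrates: if the throttled VMSSVMUpdateAsync PUT never lands and the pending entry is gone by the time the retry fires, the LUN lookup finds the disk neither on the cached VM model nor in the pending map, matching the error string in the audit log below.

package main

import (
	"errors"
	"fmt"
)

// vmModel stands in for the cached VMSS VM representation.
type vmModel struct {
	dataDisks map[string]int32 // disk URI -> LUN on the VM model
}

var errThrottled = errors.New(`azure cloud provider throttled for operation VMSSVMUpdateAsync with reason "client throttled"`)

// attachDisk picks a LUN, queues the disk in the pending diskMap, and tries
// to push the updated VM model to Azure. If the PUT is throttled, the
// pending entry is gone by the time the caller retries.
func attachDisk(vm *vmModel, diskMap map[string]int32, diskURI string, throttled bool) error {
	diskMap[diskURI] = 23 // hypothetical LUN chosen by SetDiskLun
	defer delete(diskMap, diskURI)
	if throttled {
		return errThrottled // the VM model update never lands on the Azure side
	}
	vm.dataDisks[diskURI] = diskMap[diskURI]
	return nil
}

// findDiskLun mirrors the post-attach LUN lookup that produced the
// "could not find disk ... nor in diskMap" error in the logs.
func findDiskLun(vm *vmModel, diskMap map[string]int32, diskURI string) (int32, error) {
	if lun, ok := vm.dataDisks[diskURI]; ok {
		return lun, nil
	}
	if lun, ok := diskMap[diskURI]; ok {
		return lun, nil
	}
	return -1, fmt.Errorf("could not find disk(%s) in current disk list(len: %d) nor in diskMap(map[%v])",
		diskURI, len(vm.dataDisks), diskMap)
}

func main() {
	vm := &vmModel{dataDisks: map[string]int32{}}
	diskMap := map[string]int32{}
	diskURI := "/subscriptions/.../disks/example"

	// First attempt: Azure throttles the VMSSVMUpdateAsync PUT.
	if err := attachDisk(vm, diskMap, diskURI, true); err != nil {
		fmt.Println("attach failed:", err)
	}
	// Retry after RetryAfter expires: neither the VM model nor the pending
	// map knows the disk, so the LUN lookup fails.
	if _, err := findDiskLun(vm, diskMap, diskURI); err != nil {
		fmt.Println("lun lookup failed:", err)
	}
}

Running it prints the throttling error followed by the same "could not find disk(...) in current disk list ... nor in diskMap(map[])" failure on the retry.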

Anything else we need to know?:

From the cluster's audit log: we get a throttling failure, followed by a failure setting the LUN, and the disk doesn't actually attach until ~10 minutes later.

  "attachError": {
    "message": "rpc error: code = Unknown desc = Attach volume /subscriptions/.... to instance aks-agentpool-... failed with Retriable: true, RetryAfter: 121s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMSSVMUpdateAsync with reason \"client throttled\"",
    "time": "2022-05-13T17:23:16Z"
  }
{
  "attachError": {
    "message": "rpc error: code = Unknown desc = Attach volume /subscriptions/... to instance aks-agentpool-... failed with could not find disk(/subscriptions/...) in current disk list(len: 14) nor in diskMap(map[])",
    "time": "2022-05-13T17:24:12Z"
  }
}
 "time": "2022-05-13T17:34:06Z",
  "status": {
    "attached": true,
    "attachmentMetadata": {
      "LUN": "23"
    }

From the Azure Activity logs I can see that the PUT carrying the disk didn't come in until 17:34, so the delay is not on the Azure side.

Environment:

  • Kubernetes version (use kubectl version): 1.22.6
  • Cloud provider or hardware configuration: Azure
  • Others: disk.csi.azure.com/v1.17.0

chmill-zz avatar May 18 '22 22:05 chmill-zz

@andyzhangx can you have a look when you have time?

nilo19 avatar Jul 08 '22 05:07 nilo19

@chmill-zz The VMSSVMUpdateAsync operation was throttled; that happens when disks are attached/detached on a lot of agent nodes simultaneously. See the sketch of client-side throttling below.

andyzhangx avatar Jul 08 '22 06:07 andyzhangx
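
To illustrate what "client throttled" means here, below is a minimal sketch of client-side token-bucket rate limiting using golang.org/x/time/rate. It is not the cloud provider's actual rate-limiter code, and the 1 QPS / burst-of-5 numbers are arbitrary stand-ins for whatever write limits the cloud provider configuration sets: once a burst of simultaneous VM updates drains the bucket, further calls are rejected locally with a retry-after delay, before any request reaches Azure.

package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical write limit: 1 request/sec with a burst of 5, standing in
	// for the cloud provider's configured write-operation rate limit.
	limiter := rate.NewLimiter(rate.Limit(1), 5)

	// A burst of simultaneous VM updates, e.g. many nodes attaching disks at once.
	for i := 0; i < 8; i++ {
		r := limiter.Reserve()
		if d := r.Delay(); d > 0 {
			r.Cancel() // return the token; the caller is expected to retry later
			fmt.Printf("update %d: client throttled, RetryAfter: %s\n", i, d.Round(time.Second))
			continue
		}
		fmt.Printf("update %d: VMSSVMUpdateAsync allowed\n", i)
	}
}

This is consistent with HTTPStatusCode: 0 in the first log entry above, which suggests the request was rejected on the client side before it was ever sent, rather than by an Azure 429 response.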

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 06 '22 07:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 05 '22 07:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 05 '22 08:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 05 '22 08:12 k8s-ci-robot