
GRPC response DeadlineExceeded

Open ruan875417 opened this issue 11 months ago • 4 comments

What happened:

When using external-provisioner, we create a PVC and our driver creates the volume successfully: [controllerserver.go:116] CreateVolume pvc-069b89a0-271f-4d86-86bf-61608aae66d4 success. cost time:10.125894947s. The CreateVolume call took more than 10s (driver log screenshot).

Then external-provisioner reports context deadline exceeded; the effective timeout appears to be 10s (provisioner log screenshot).

However, the --timeout parameter does not seem to take effect.

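A quick way to confirm which deadline the provisioner actually applies is to log it from the driver side. Below is a minimal Go sketch (the helper is hypothetical, not part of our driver); if the remaining time is always about 10s no matter what --timeout is set to, the flag is not reaching the provisioner:

package driver

import (
    "context"
    "time"

    "k8s.io/klog/v2"
)

// logIncomingDeadline prints how much time the caller (external-provisioner)
// left on the context. Call it at the top of the driver's CreateVolume.
func logIncomingDeadline(ctx context.Context) {
    if deadline, ok := ctx.Deadline(); ok {
        klog.Infof("CreateVolume deadline in %s", time.Until(deadline).Round(time.Millisecond))
    } else {
        klog.Info("CreateVolume was called without a deadline")
    }
}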

What you expected to happen:

The external-provisioner gRPC call completes normally instead of timing out.

How to reproduce it:

Have the driver take more than 10s to create a volume; a driver-side sketch is shown below.
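
For example, a hypothetical controller handler that reproduces this, assuming the CSI spec Go bindings (other ControllerServer methods omitted):

package driver

import (
    "context"
    "time"

    "github.com/container-storage-interface/spec/lib/go/csi"
)

// slowControllerServer simulates a backend where provisioning takes longer
// than the external-provisioner's call timeout.
type slowControllerServer struct{}

// CreateVolume takes longer than the provisioner's deadline; the driver still
// succeeds, but the provisioner has already logged DeadlineExceeded and will
// retry with the same volume name.
func (s *slowControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
    time.Sleep(15 * time.Second)
    return &csi.CreateVolumeResponse{
        Volume: &csi.Volume{
            VolumeId:      req.GetName(),
            CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
        },
    }, nil
}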

Anything else we need to know?:

Environment:

  • Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

ruan875417 avatar Dec 04 '24 09:12 ruan875417

Which external-provisioner version do you use? I can see that the latest v5.1.0 uses the value of the --timeout parameter for the CreateVolume call.

With --timeout=20s and a CSI driver that never finishes the CreateVolume call, I can see that the call times out after 20s:

I1217 10:29:07.625495       1 connection.go:264] "GRPC call" method="/csi.v1.Controller/CreateVolume" request="{\"accessibility_requirements\":{\"preferred\":[{\"segments\":{\"topology.hostpath.csi/node\":\"192.168.122.186\"}}],\"requisite\":[{\"segments\":{\"topology.hostpath.csi/node\":\"192.168.122.186\"}}]},\"capacity_range\":{\"required_bytes\":1073741824},\"name\":\"pvc-1f75dddf-6b0c-444d-a150-b3db8e7df7b9\",\"volume_capabilities\":[{\"AccessType\":{\"Mount\":{}},\"access_mode\":{\"mode\":7}}]}"
I1217 10:29:27.625435       1 connection.go:270] "GRPC response" response="{}" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"

And with --timeout=50s, I can see a 50s delay:

I1217 10:31:50.988082       1 connection.go:264] "GRPC call" method="/csi.v1.Controller/CreateVolume" request="{\"accessibility_requirements\":{\"preferred\":[{\"segments\":{\"topology.hostpath.csi/node\":\"192.168.122.186\"}}],\"requisite\":[{\"segments\":{\"topology.hostpath.csi/node\":\"192.168.122.186\"}}]},\"capacity_range\":{\"required_bytes\":1073741824},\"name\":\"pvc-5923156e-a2bf-4f41-afc6-0ff224a74baf\",\"volume_capabilities\":[{\"AccessType\":{\"Mount\":{}},\"access_mode\":{\"mode\":7}}]}"
I1217 10:32:40.988504       1 connection.go:270] "GRPC response" response="{}" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
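
In both cases the deadline equals the --timeout value, because each CreateVolume call gets its own context deadline. Roughly (a simplified sketch, not the actual external-provisioner / csi-lib-utils code):

package sketch

import (
    "context"
    "time"

    "github.com/container-storage-interface/spec/lib/go/csi"
)

// createVolumeWithTimeout shows how the --timeout value turns into a per-call
// gRPC deadline. If the driver takes longer than `timeout`, gRPC returns
// codes.DeadlineExceeded ("context deadline exceeded").
func createVolumeWithTimeout(client csi.ControllerClient, req *csi.CreateVolumeRequest, timeout time.Duration) (*csi.CreateVolumeResponse, error) {
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()
    return client.CreateVolume(ctx, req)
}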

If you have the latest provisioner version, I would suggest debugging on your side, as I can't reproduce the issue.

jsafrane avatar Dec 17 '24 10:12 jsafrane

I can see that --timeout works correctly even with v5.0.1, so please triple-check everything on your side.

jsafrane avatar Dec 17 '24 10:12 jsafrane

Use v5 or later, as it correctly respects the --timeout parameter, and ensure the --timeout flag is properly passed and not overridden by another configuration.

Please cross-verify.
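
For reference, if the argument never reaches the provisioner binary (a typo in the manifest, the flag placed on the wrong container, or a later duplicate argument overriding it), the compiled-in default is what you will observe. The pattern looks roughly like this (simplified sketch with an illustrative default; check the actual default of your provisioner release):

package sketch

import (
    "flag"
    "time"
)

// If --timeout is never parsed from the container args, this default governs
// the deadline of every CSI call. A default of around 10s would match the
// behaviour reported above. (Illustrative value; verify against your release.)
var operationTimeout = flag.Duration("timeout", 10*time.Second,
    "Timeout of all calls to the CSI driver")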

niranjandarshann avatar Mar 07 '25 05:03 niranjandarshann

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 05 '25 06:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 05 '25 06:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 04 '25 06:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 04 '25 06:08 k8s-ci-robot