
Issues with CreateSingleUseGrpcTunnel

ash2k opened this issue 2 years ago • 5 comments

  1. CreateSingleUseGrpcTunnel() does not allow using different contexts for dialing and for connection lifetime control. There seems to be no way to shut the tunnel down.
  2. Context cancellation/timeout in grpcTunnel.DialContext() doesn't interrupt any I/O and doesn't signal the serve() goroutine in any way. This is suspicious and possibly a source of connection leaks.

See this thread and PR for background: https://github.com/kubernetes/kubernetes/pull/110079#discussion_r874800421.
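To make the coupling in (1) concrete, here is a minimal sketch, assuming the konnectivity-client API at the time: client.CreateSingleUseGrpcTunnel takes a single context that feeds both the gRPC dial and the long-lived Proxy stream behind the returned Tunnel. The proxy and backend addresses are hypothetical.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/client"
)

func main() {
	// Bounding the dial with a timeout also bounds the tunnel: once this
	// context expires, the Proxy stream backing the tunnel dies with it.
	dialCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	tunnel, err := client.CreateSingleUseGrpcTunnel(
		dialCtx,
		"konnectivity-server:8090", // hypothetical proxy address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Conversely, passing context.Background() above would keep the tunnel
	// alive but leave the dial unbounded, and Tunnel exposes no Close():
	// lifetime control is coupled to that one context.
	conn, err := tunnel.DialContext(context.Background(), "tcp", "10.0.0.1:443") // hypothetical backend
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```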

ash2k avatar May 24 '22 01:05 ash2k

/assign @cheftako @jkh52

this seems important to resolve

liggitt avatar May 24 '22 14:05 liggitt

/reopen (not finished)

jkh52 avatar Jun 21 '22 00:06 jkh52

/open

cheftako avatar Jun 24 '22 20:06 cheftako

/reopen

cheftako avatar Jun 24 '22 20:06 cheftako

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 22 '22 20:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 22 '22 21:10 k8s-triage-robot

/remove-lifecycle rotten

jkh52 avatar Oct 31 '22 19:10 jkh52

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 29 '23 19:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 28 '23 20:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 30 '23 20:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 30 '23 20:03 k8s-ci-robot

This is marked rotten/stale, but I would argue that at this point it is fixed.

CreateSingleUseGrpcTunnel() does not allow using different contexts for dialing and for connection lifetime control. There seems to be no way to shut the tunnel down.

This concern is still valid, but because of kube-apiserver connection re-use, there isn't an obvious way to close the single-use tunnel until the client calls Close on the net.Conn (actually a konnectivity client.conn).
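In other words, the only teardown path runs through the returned conn. A minimal sketch of that lifecycle, assuming a tunnel created as in the earlier example (same imports, backend address hypothetical):

```go
// Sketch: a single-use tunnel has no Close() of its own; it is released
// when the caller closes the net.Conn obtained from DialContext.
func useAndRelease(ctx context.Context, tunnel client.Tunnel) error {
	conn, err := tunnel.DialContext(ctx, "tcp", "10.0.0.1:443") // hypothetical backend
	if err != nil {
		return err
	}
	// ... proxy traffic over conn ...

	// While kube-apiserver re-uses the connection, this Close never runs
	// and the tunnel stays up.
	return conn.Close()
}
```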

Context cancellation/timeout in grpcTunnel.DialContext() doesn't interrupt any I/O and doesn't signal the serve() goroutine in any way. This is suspicious and possibly a source of connection leaks.

This was fixed by https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/360.
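A minimal sketch of the post-#360 behavior as described there: the request context now bounds a pending dial instead of being ignored (same hypothetical tunnel, backend, and imports as above):

```go
// Sketch: with the #360 fix, cancellation/timeout of reqCtx interrupts a
// pending DialContext rather than leaving serve() blocked and the
// connection leaked.
func dialWithDeadline(tunnel client.Tunnel) (net.Conn, error) {
	reqCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return tunnel.DialContext(reqCtx, "tcp", "10.0.0.1:443") // hypothetical backend
}
```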

jkh52 avatar Mar 30 '23 21:03 jkh52