
Port-forward drops connection to pod after first connection

Open zanettea opened this issue 3 years ago • 11 comments

Hi @eddiezane, @brianpursley — issue #1169, despite being marked as completed, is not actually fixed, and it is breaking many tools with major impact. Personally, I can no longer connect to PostgreSQL, nor can I debug Node.js/Java apps via VS Code. I am currently forced to use v1.22, since all subsequent versions (1.23, 1.24, 1.25, 1.26) are still broken. I can't believe such an impactful issue has not received a fix in all this time. I hope to see a fix soon.

Thanks

zanettea avatar Feb 02 '23 19:02 zanettea
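[Editor's note] Since the symptom described above is the forwarder process exiting after the first dropped connection, a common mitigation (not part of kubectl itself, and not proposed in this thread) is to restart `kubectl port-forward` in a bounded loop. The function name `retry_forward` and the retry policy below are illustrative.

```shell
#!/bin/sh
# Hedged workaround sketch: on the affected kubectl versions (v1.23+),
# port-forward can exit after the first dropped connection, so restart it
# in a bounded loop. retry_forward is an illustrative helper name.
retry_forward() {
  max=$1; shift            # maximum number of attempts
  n=0
  until "$@"; do           # rerun the forwarder each time it exits non-zero
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    echo "forwarder exited; restarting (attempt $n/$max)" >&2
    sleep 1
  done
}

# Example usage (service name is an assumption; adjust for your cluster):
# retry_forward 20 kubectl port-forward svc/postgresql 5432:5432
```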

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 02 '23 19:02 k8s-ci-robot

Comments directly on issue #1169 are the best way forward; if that's not getting traction, see https://github.com/kubernetes/community/blob/master/sig-cli/README.md to find out how to participate in the CLI SIG.

sftim avatar Feb 07 '23 00:02 sftim

/retitle Port-forward drops connection to pod after first connection
/triage duplicate

sftim avatar Feb 07 '23 00:02 sftim

I am experiencing the same issue: the connection just drops randomly after the first connection was successful.

Forwarding from 127.0.0.1:3310 -> 3306
Forwarding from [::1]:3310 -> 3306
Handling connection for 3310
E0221 12:39:24.471089   93743 portforward.go:406] an error occurred forwarding 3310 -> 3306: error forwarding port 3306 to pod xxxxxxxxxx, uid : failed to execute portforward in network namespace "/var/run/netns/cni-c8c76b9d-cf06-e7d9-c527-4df5ff50483a": read tcp4 127.0.0.1:38850->127.0.0.1:3306: read: connection reset by peer
E0221 12:39:24.471350   93743 portforward.go:234] lost connection to pod

kubectl v1.24.0, macOS Ventura 13.2

kaiffeetasse avatar Feb 21 '23 11:02 kaiffeetasse
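[Editor's note] The `read: connection reset by peer` line in the log above means the process inside the pod reset that TCP connection; on the affected kubectl versions, a single such error then tears down the whole forwarder (`lost connection to pod`) rather than just the one stream. A hedged sketch of a supervisor that restarts the forwarder only when that specific message appears on stderr (`forward_until_lost` is an illustrative name, not a kubectl feature):

```shell
#!/bin/sh
# Hedged sketch: restart the forwarder only when it dies with the specific
# "lost connection to pod" error from the log above; any other failure
# (or a clean exit) is surfaced instead of retried.
forward_until_lost() {
  while :; do
    # capture stderr; forwarded data is carried on sockets, not stdout
    err=$("$@" 2>&1 >/dev/null)
    case $err in
      *"lost connection to pod"*)
        echo "lost connection to pod; restarting forwarder" >&2
        sleep 1
        ;;
      *)
        printf '%s\n' "$err" >&2
        return 1
        ;;
    esac
  done
}

# Example usage (local/remote ports taken from the log above):
# forward_until_lost kubectl port-forward pod/xxxxxxxxxx 3310:3306
```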

I'm getting this as well: macOS Ventura 13.2.1 + DataGrip (on the client side).

Server side: Debian 11, Rancher k3s v1.26.2+k3s1, cnpg 1.19.1

yuryskaletskiy avatar Mar 20 '23 22:03 yuryskaletskiy

For me it's also an issue. I'm not sure it happens strictly after the first connection (I can run multiple curls to the forwarded port), but port forwarding definitely only works for a short period of time (a few minutes at most), after which I get connection refused.

mateuszdrab avatar Apr 17 '23 10:04 mateuszdrab
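[Editor's note] "Connection refused" after a few minutes typically means the local forwarder process has already exited (as in the `lost connection to pod` error earlier in the thread), so nothing is listening on the local port anymore. A hedged diagnostic sketch to confirm that (`check_forwarder` is an illustrative helper; `pgrep` and `ss` are assumed to be available):

```shell
#!/bin/sh
# Hedged diagnostic sketch: check whether the kubectl port-forward process
# and its local listener are still alive. check_forwarder is illustrative;
# the port passed in (e.g. 3310 from the log above) is only an example.
check_forwarder() {
  port=$1
  if pgrep -f 'kubectl port-forward' >/dev/null 2>&1; then
    echo "forwarder process is running"
  else
    echo "forwarder process not found"
  fi
  if ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "local port $port is listening"
  else
    echo "nothing listening on local port $port"
  fi
}

# Example usage:
# check_forwarder 3310
```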

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 16 '23 11:07 k8s-triage-robot

/remove-lifecycle stale

mateuszdrab avatar Jul 16 '23 11:07 mateuszdrab

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 24 '24 15:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 23 '24 15:02 k8s-triage-robot