Port Forward - Different local and remote ports

razilevin opened this issue 2 years ago • 5 comments

I would like to be able to spin up multiple port forwards at the same time, if possible, i.e. the equivalent of the kubectl CLI command form kubectl --context <some-context> port-forward -n <some-namespace> <some-pod> <local-port>:<remote-port>:

kubectl --context cluster-1 port-forward -n n1 pod 9200:9200 &
kubectl --context cluster-2 port-forward -n n1 pod 9100:9200 &

As far as I can tell, there is currently no way to do this.

I get an error that the ports must be the same when creating the port-forwarded socket.

pf = portforward(
    dispatch["k8s"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="<some-remote-port>",
)
return pf.socket(<some-local-port>)

Am I missing something?

razilevin · Oct 31 '23 21:10

/assign @iciclespider

roycaihw · Nov 06 '23 17:11

@roycaihw: GitHub didn't allow me to assign the following users: iciclespider.

Note that only kubernetes-client members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

/assign @iciclespider

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 06 '23 17:11

@iciclespider Could you take a look? Thanks

roycaihw · Nov 06 '23 17:11

If this is essentially what you are trying to recreate:

kubectl --context cluster-1 port-forward -n n1 pod 9200:9200 &
kubectl --context cluster-2 port-forward -n n1 pod 9100:9200 &

This is roughly how it will be accomplished:

cluster_1_pf = portforward(
    dispatch["cluster-1"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="9200",
)
cluster_2_pf = portforward(
    dispatch["cluster-2"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="9200",
)
cluster_1_port_9200 = cluster_1_pf.socket(9200)
cluster_2_port_9200 = cluster_2_pf.socket(9200)
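
For completeness, dispatch here is assumed to be a per-context mapping of CoreV1Api clients; a rough sketch, with build_dispatch as a made-up helper name:

from kubernetes import client, config
from kubernetes.stream import portforward  # the portforward used above

# Hypothetical helper (not part of the client API): one CoreV1Api per
# kubeconfig context, keyed by context name, so that
# dispatch["cluster-1"].connect_get_namespaced_pod_portforward resolves
# as in the snippet above.
def build_dispatch(contexts):
    apis = {}
    for ctx in contexts:
        api_client = config.new_client_from_config(context=ctx)
        apis[ctx] = client.CoreV1Api(api_client)
    return apis

dispatch = build_dispatch(["cluster-1", "cluster-2"])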

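Note that, unlike kubectl, the client-side portforward does not bind a local port: ports= names the remote port(s), and pf.socket(<remote-port>) hands back the connection directly, so there is no direct equivalent of the 9100 vs 9200 local ports in the kubectl commands. If a real local listener is wanted, a rough sketch (an assumption, not part of the client API) is to bind the local port yourself and relay bytes to the socket returned above:

import socket
import threading

def serve_locally(local_port, pod_sock):
    # Accept a single connection on 127.0.0.1:<local_port> and shuttle bytes
    # between it and the socket returned by pf.socket(<remote-port>).
    # One connection only; enough to illustrate the idea.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", local_port))
    listener.listen(1)
    local_conn, _ = listener.accept()

    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    threading.Thread(target=pump, args=(local_conn, pod_sock), daemon=True).start()
    pump(pod_sock, local_conn)

# Mimic 9200:9200 and 9100:9200, reusing the sockets created above.
threading.Thread(target=serve_locally, args=(9200, cluster_1_port_9200), daemon=True).start()
threading.Thread(target=serve_locally, args=(9100, cluster_2_port_9200), daemon=True).start()
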
iciclespider · Nov 06 '23 19:11

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Feb 04 '24 20:02

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Mar 05 '24 20:03

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Apr 04 '24 21:04

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Apr 04 '24 21:04