Port Forward - Different local port and remote port
I would like to be able to spin up multiple port forwards at the same time, i.e. the equivalent of the kubectl CLI form kubectl --context <some-context> port-forward -n <some-namespace> <some-pod> <local-port>:<remote-port>:
kubectl --context cluster-1 port-forward -n n1 pod 9200:9200 &
kubectl --context cluster-2 port-forward -n n1 pod 9100:9200 &
As far as I can tell, there is currently no way to do this. I get an error that the ports must be the same when creating the port-forwarded socket:
pf = portforward(
    dispatch["k8s"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="<some-remote-port>",
)
return pf.socket(<some-local-port>)
Am I missing something?
/assign @iciclespider
@roycaihw: GitHub didn't allow me to assign the following users: iciclespider.
Note that only kubernetes-client members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @iciclespider
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@iciclespider Could you take a look? Thanks
If this is essentially what you are trying to recreate:
kubectl --context cluster-1 port-forward -n n1 pod 9200:9200 &
kubectl --context cluster-2 port-forward -n n1 pod 9100:9200 &
This is roughly how it would be accomplished:
cluster_1_pf = portforward(
    dispatch["cluster-1"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="9200",
)
cluster_2_pf = portforward(
    dispatch["cluster-2"].connect_get_namespaced_pod_portforward,
    "...",
    "...",
    _request_timeout=60,
    ports="9200",
)

# socket() takes the remote (pod) port that was passed in ports=.
cluster_1_port_9200 = cluster_1_pf.socket(9200)
cluster_2_port_9200 = cluster_2_pf.socket(9200)
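To get the kubectl-style 9100:9200 mapping on top of those sockets, the local port is something you bind yourself; ports= and socket() both refer to the pod's port, and the client does not open a local listener for you. Below is a minimal sketch of that relay, assuming the object returned by socket() behaves like a regular Python socket; relay and serve_local are just illustrative helper names, not part of the client library:

import socket
import threading

def relay(src, dst):
    # Copy bytes from src to dst until src closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def serve_local(local_port, forwarded_socket):
    # Hypothetical helper: listen on the chosen local port and splice one
    # connection into the forwarded socket, emulating kubectl's
    # <local-port>:<remote-port> mapping.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", local_port))
    listener.listen(1)
    conn, _ = listener.accept()
    t = threading.Thread(target=relay, args=(conn, forwarded_socket), daemon=True)
    t.start()
    relay(forwarded_socket, conn)
    conn.close()

# e.g. expose cluster-2's remote port 9200 on local port 9100
serve_local(9100, cluster_2_port_9200)

A real version would accept connections in a loop and handle errors, but it shows the point: the local port lives entirely on your side, so mapping local 9100 to remote 9200 is just a choice of bind address.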
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.