Figure out how to test clusterTrafficPolicy=cluster vs local in KPNG
Looking at https://github.com/kubernetes-sigs/kpng/pull/287/
possibly add new tests upstream (or see if they're there already)
-
Talking to @mcluseau, he's trying to add the ability for the kpng proxy to make sure that it only uses local endpoints when "clusterTrafficPolicy=local".
-
To test this, we can make a Service with an externalIP or NodePort to exercise serviceEndpoint=local, where EACH ENDPOINT serves its node name... and you NEVER get more than one value. That means there's zero node-bounce forwarding occurring.
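A rough sketch of that check in Go, just to make the idea concrete: hammer one node's externalIP/NodePort and collect the distinct node names the backends report. The address, port, and the assumption that each endpoint replies with its node name at `/` are all placeholders, not actual test code.

```go
// Hypothetical sketch of the check described above: hit one node's
// NodePort repeatedly and assert we only ever see a single node name back.
// Assumes each backend pod replies with its spec.nodeName (e.g. injected
// via the downward API); the IP/port values below are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func distinctNodeNames(nodeIP string, nodePort, attempts int) (map[string]bool, error) {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(fmt.Sprintf("http://%s:%d/", nodeIP, nodePort))
		if err != nil {
			return nil, err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		seen[string(body)] = true
	}
	return seen, nil
}

func main() {
	seen, err := distinctNodeNames("192.0.2.10", 30080, 20)
	if err != nil {
		panic(err)
	}
	// With a local traffic policy, traffic arriving at this node must only
	// be served by endpoints on this node, so exactly one name may appear.
	if len(seen) != 1 {
		panic(fmt.Sprintf("expected exactly one backend node, saw %v", seen))
	}
}
```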
-
Similar to https://github.com/kubernetes/kubernetes/pull/110967, we might need a new upstream e2e test; not sure yet.
-
Let's also ask the question generally to the upstream k8s SIG:
Services [It] should preserve source pod IP for traffic thru service cluster
should we preserve source addresses for NodePorts
... when externalTrafficPolicy=local?
(DON'T DO THIS, but just as a note, there's another way to test this: set up a cluster where nodes can't forward traffic somehow, and see connection failures.)
i.e. "services should not source NAT unnecessarily" - mc
We should be able to test this with an extra "remote" container
@astoycos yes, I was thinking of a DaemonSet returning its nodeName, i.e. over HTTP
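For illustration, a minimal sketch of what each DaemonSet pod could run, assuming the pod spec injects spec.nodeName into a NODE_NAME env var via the downward API (the variable name and port are made up here):

```go
// Minimal sketch of the per-node backend: an HTTP server that replies
// with the name of the node it is scheduled on.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Assumed to be set via the downward API (fieldRef: spec.nodeName).
	nodeName := os.Getenv("NODE_NAME")
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, nodeName)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```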
@mcluseau I can take a look at this :) we do something similar in ovn-kubernetes
/assign
Opening this up for folks
/unassign
I can still help here though please reach out if you need help!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hi @astoycos, can you help/guide me with this issue? Where should I start?
Hey @emabota, that'd be great, thanks for the help!
I think what we can do is:
- Spin up an external client container
- Make a NodePort Service and set ETP=local (externalTrafficPolicy)
- Make sure traffic from the external client container to the NodePort Service ends up on the correct node
For the backend pod, https://pkg.go.dev/k8s.io/kubernetes/test/images/agnhost/netexec is often super helpful here since it has a /hostname endpoint; see the sketch below.
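As a rough illustration of that flow (not the actual e2e code), the external client's check could look something like this. The podsOnNode map and addresses are placeholders, and it assumes netexec's /hostname reply is the serving pod's hostname (i.e. the pod name), which the test would map back to a node by listing the DaemonSet's pods:

```go
// Sketch of the external-client check: query a node's NodePort and verify
// the reply came from a pod local to that node.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func checkETPLocal(nodeIP string, nodePort int, podsOnNode map[string]bool) error {
	resp, err := http.Get(fmt.Sprintf("http://%s:%d/hostname", nodeIP, nodePort))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if !podsOnNode[string(body)] {
		return fmt.Errorf("reply from %q, which is not local to node %s", body, nodeIP)
	}
	return nil
}

func main() {
	// Placeholder values: in the real test these come from the cluster,
	// e.g. by listing the backend pods and grouping them by spec.nodeName.
	pods := map[string]bool{"netexec-abc12": true}
	if err := checkETPLocal("192.0.2.10", 30080, pods); err != nil {
		panic(err)
	}
	fmt.Println("traffic stayed on the local node")
}
```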
/assign
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.