external-dns
external-dns copied to clipboard
feat: pods can have hostPorts without hostNetwork
Description
Pods can have hostPorts without hostNetwork. I propose to remove the check that prevents annotations from being evaluated on pods that are not on hostNetwork.
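Roughly, the kind of guard being discussed looks like the sketch below (a paraphrase for illustration only, not the actual external-dns source; the function name and surrounding structure are made up):

```go
package podsource

import (
	log "github.com/sirupsen/logrus"
	corev1 "k8s.io/api/core/v1"
)

// filterHostNetworkPods paraphrases the check this PR proposes to remove:
// pods without hostNetwork are skipped before their external-dns annotations
// are ever evaluated, even if they expose hostPorts.
func filterHostNetworkPods(pods []*corev1.Pod) []*corev1.Pod {
	var kept []*corev1.Pod
	for _, pod := range pods {
		if !pod.Spec.HostNetwork {
			log.Debugf("skipping pod %s: hostNetwork is not enabled", pod.Name)
			continue // dropping this early skip is the point of the PR
		}
		kept = append(kept, pod)
	}
	return kept
}
```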
Checklist
- [X] Unit tests updated
- [X] End user documentation updated
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: gregorycuellar / name: CUELLAR Grégory (f2cb5ed715b52b09bcc7d6119a59717d65ce48f6)
Welcome @gregorycuellar!
It looks like this is your first PR to kubernetes-sigs/external-dns 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/external-dns has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
This appears to be incorrect. In many CNIs, connections to non-host-network pods would need to go to the pod IP, not the node IP.
@johngmyers can you explain?
What's the point of having a hostPort and sending traffic to the Pod IP? If you can send traffic directly to the Pod IP, you don't need hostPorts.
hostPorts and hostNetwork are two separate concepts which, as far as I know, are not linked together. Also, on many cloud providers you can only define hostPorts, not hostNetwork.
I don't know all CNIs, so there may be exceptions I'm not aware of.
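To make the distinction concrete, here is a minimal, hypothetical example using the Kubernetes Go types (the pod name, image, annotation value, and port numbers are made up): the container declares a hostPort while hostNetwork stays at its default of false.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "hostport-without-hostnetwork", // hypothetical name
			Annotations: map[string]string{
				// annotation read by the pod source; value is made up
				"external-dns.alpha.kubernetes.io/hostname": "app.example.org",
			},
		},
		Spec: corev1.PodSpec{
			// HostNetwork stays false: the pod keeps its own pod IP...
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "nginx",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      30080, // ...yet this port is also bound on the node
				}},
			}},
		},
	}
	fmt.Printf("hostNetwork=%v, hostPort=%d\n",
		pod.Spec.HostNetwork, pod.Spec.Containers[0].Ports[0].HostPort)
}
```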
In CNIs that don't use an overlay network, such as the AWS VPC CNI, you can send traffic to the pod IP directly.
This PR simply removes the check, so it will create records for pods that don't have any hostPorts. It also ignores the ports.hostIP fields, so it will publish DNS records with incorrect or non-working IPs.
If you want to extend the pod source's support for non-host-network pods, you will likely need to handle them as a separate case.
I think I understand the case you are mentioning.
For me, it was covered because you can have an internal hostname annotation (which points to the pod IP) and a hostname annotation (which points to the node IP). Also, with the check in place, neither is defined if there is no hostNetwork, so this use case is not working today.
Without the PR:
- with hostnetwork and no annotation -> nothing defined
- with hostnetwork and internal hostname annotation -> Pod IP
- with hostnetwork and hostname annotation -> Node IP
- without hostnetwork and no annotation -> nothing defined
- without hostnetwork and internal hostname annotation -> nothing defined
- without hostnetwork and hostname annotation -> nothing defined
- ports.hostIP is ignored
With the PR:
- with hostnetwork and no annotation -> nothing defined
- with hostnetwork and internal hostname annotation -> Pod IP
- with hostnetwork and hostname annotation -> Node IP
- without hostnetwork and no annotation -> nothing defined
- without hostnetwork and internal hostname annotation -> Pod IP
- without hostnetwork and hostname annotation -> Node IP
- ports.hostIP is ignored
I will try to rework the PR to have the following (a rough sketch of this logic follows the list):
- with hostnetwork and no annotation -> nothing defined
- with hostnetwork and internal hostname annotation -> Pod IP
- with hostnetwork and hostname annotation -> Node IP
- without hostnetwork and no annotation -> nothing defined
- without hostnetwork and internal hostname annotation -> Pod IP
- without hostnetwork and hostname annotation -> Node IP if hostPort is defined, Pod IP otherwise
- ports.hostIP is used if defined
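The sketch below illustrates those rules under stated assumptions: annotation lookups and node-IP resolution happen elsewhere, and the helper names are hypothetical, not the PR's actual code.

```go
package podsource

import corev1 "k8s.io/api/core/v1"

// podTargets sketches the proposed semantics: the internal-hostname annotation
// always maps to the pod IP; the hostname annotation maps to ports.hostIP when
// set, otherwise to the node IP when the pod is on the host network or exposes
// a hostPort, and to the pod IP as a last resort.
func podTargets(pod *corev1.Pod, nodeIP string, hasHostname, hasInternalHostname bool) (internal, external []string) {
	if hasInternalHostname {
		internal = append(internal, pod.Status.PodIP)
	}
	if hasHostname {
		switch {
		case len(hostIPs(pod)) > 0:
			external = append(external, hostIPs(pod)...)
		case pod.Spec.HostNetwork || hasHostPort(pod):
			external = append(external, nodeIP)
		default:
			external = append(external, pod.Status.PodIP)
		}
	}
	return internal, external
}

// hasHostPort reports whether any container port declares a hostPort.
func hasHostPort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.HostPort != 0 {
				return true
			}
		}
	}
	return false
}

// hostIPs collects the hostIP values declared on container ports, if any.
func hostIPs(pod *corev1.Pod) []string {
	var ips []string
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.HostIP != "" {
				ips = append(ips, p.HostIP)
			}
		}
	}
	return ips
}
```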
#3174 is similar
~~I have a branch https://github.com/johngmyers/external-dns/tree/pod-from-node which switches the source of internal IPs for pods from the podIP to the nodes. This is necessary for IPv6, when the pod network is single-stack but nodes are dual-stack. That pending change is likely to interact with extending the pod source to handle non-host-network pods.~~ (I withdraw this comment.)
I concur with the proposed semantics of May 7.
I believe if there are multiple ports it should use the union of all hostIPs if said union is non-empty.
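A small sketch of that union, assuming the same corev1 types as in the earlier examples (the function name is hypothetical):

```go
package podsource

import corev1 "k8s.io/api/core/v1"

// hostIPUnion returns the distinct hostIP values declared across all container
// ports of the pod. Per the suggestion above, a non-empty result would be used
// as the set of targets; an empty result falls back to the node/pod IP rules.
func hostIPUnion(pod *corev1.Pod) []string {
	seen := map[string]bool{}
	var ips []string
	for _, c := range pod.Spec.Containers {
		for _, p := range c.Ports {
			if p.HostIP != "" && !seen[p.HostIP] {
				seen[p.HostIP] = true
				ips = append(ips, p.HostIP)
			}
		}
	}
	return ips
}
```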
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign johngmyers for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/ok-to-test
@gregorycuellar What do you think of #3174? Would this solve your issue? If not, do you think you can rebase and fix the tests?
@gregorycuellar: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-external-dns-lint | d9000187049df7cac4c8a759774085dc530f726e | link | true | /test pull-external-dns-lint |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@mloiseleur No, with #3174 it's still not possible to get the Node IP if hostNetwork is not defined (cf. L103).
PR has been rebased and tests fixed.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@mloiseleur do you think it can be merged, or should it be abandoned?
We are discussing this pod feature with the other maintainers. See the conversation started on the other PR.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-lifecycle rotten
@jcralbino: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.