external-dns
CRD registry implementation
Description: Introduce a new registry: a native custom-resource-based registry. This new registry doesn't need to rely on external sources to maintain its state, as the state is kept entirely inside the Kubernetes cluster.
This new registry works like any other registry: it manages itself and keeps track of any DNS records that need to be created (no matter the source).
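As a rough illustration of the idea only (not the actual external-dns code), a registry of this kind exposes the records it owns and applies desired changes, while persisting its ownership state in-cluster instead of in provider-side TXT records. In this sketch the method names are illustrative stand-ins, and a plain map stands in for the DNSEntry custom resource:

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for external-dns's endpoint type.
type Endpoint struct {
	DNSName    string
	RecordType string
	Targets    []string
}

// Registry mirrors the general shape of a registry: report owned
// records, apply desired changes. Illustrative, not the project's
// exact interface.
type Registry interface {
	Records() []Endpoint
	ApplyChanges(create, del []Endpoint)
}

// crdRegistry keeps ownership state in-cluster; a map stands in for
// the DNSEntry custom resource in this sketch.
type crdRegistry struct {
	state map[string]Endpoint // keyed by DNS name
}

func newCRDRegistry() *crdRegistry {
	return &crdRegistry{state: map[string]Endpoint{}}
}

func (r *crdRegistry) Records() []Endpoint {
	out := make([]Endpoint, 0, len(r.state))
	for _, e := range r.state {
		out = append(out, e)
	}
	return out
}

func (r *crdRegistry) ApplyChanges(create, del []Endpoint) {
	for _, e := range create {
		r.state[e.DNSName] = e // ownership recorded alongside the record itself
	}
	for _, e := range del {
		delete(r.state, e.DNSName)
	}
}

func main() {
	var reg Registry = newCRDRegistry()
	reg.ApplyChanges([]Endpoint{{DNSName: "app.example.com", RecordType: "A", Targets: []string{"203.0.113.10"}}}, nil)
	fmt.Println(len(reg.Records())) // 1
}
```

The point of the design is that no round-trip to the DNS provider is needed to answer "which records do I own": that answer lives in the cluster.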
The CRD name is currently DNSEntry but this is subject to change. We're considering DNSRegistryEntry now but @Raffo has requested input from others. As soon as a name is finalized, I will update this PR to reflect the decision.
I've tested it with Cloudflare using two different sources: service annotations and CRD (DNSEndpoint).
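For context, the service-annotation source mentioned above picks up Services carrying the external-dns hostname annotation. A minimal example (the Service name and hostname are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
```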
Changes include:
- New Custom Resource Definition with associated cluster role updates
- Refreshed the kubebuilder tooling (and added make subcommands to ease the generation of said CRDs)
- Fully implemented a new registry named crd that manages the underlying custom resource (DNSEntry)
- Updated support for managed-record-types in Helm and fixed some areas that didn't support this feature correctly
- Created a k8s interface to make it easier to write tests for CRDs; this might cause some contention, and I'm open to discussing it
Fixes #4575
Checklist
- [x] Unit tests updated
- [ ] End user documentation updated
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Once this PR has been reviewed and has the lgtm label, please assign mloiseleur for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @pier-oliviert. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
It was mentioned on Slack that there are no documentation updates in this PR. I agree, and it's something I'd like to add, but I'll wait for the code review before spending time on the documentation.
@mloiseleur @mcharriere @Raffo I know this is a big PR, but a lot of it is boilerplate from auto-generated code and tests.
Since you three were interested in seeing a PR, I'd love to hear from you. All the changes are gated, so there is no impact on anyone using external-dns. The main change lives in crd/registry.go, which is a copy of the TXTRegistry with modifications. Although it has around 500 LOC, I think the code is manageable and fairly straightforward.
I kept the DNSEntry custom resource as lightweight as possible. The structure is identical to DNSEndpoint for now, which should help with reviewing this PR too.
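Since the PR says DNSEntry mirrors DNSEndpoint, the documented DNSEndpoint shape gives a good picture of it; a DNSEntry would presumably look the same apart from the kind (the record values below are example data):

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
    - dnsName: foo.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 192.168.99.216
```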
I'm open to breaking this PR up into four pieces (~1 PR per commit), but all the changes made here accommodate the new CRD. Let me know what you think, and thank you again.
Thanks for opening a PR so that we can see what you had in mind. I think we should use this as a place to understand the consequences and evaluate tradeoffs rather than as a full PR to review: as it is, it is definitely too big and hard to review, and I'd rather have it split into parts. We could have one PR that adds the CRD doing nothing, one that implements the registry, and so on. I'd also like to settle the naming discussion we have in https://github.com/kubernetes-sigs/external-dns/issues/4575 before anything else and keep this as a reference.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
PR needs rebase.
@pier-oliviert thank you for your huge amount of work.
I just want to ask whether it would be hard to add the possibility of better understanding which records have been or haven't been created. For example, to validate DNS record creation with ArgoCD we need to use a separate container that goes and checks the records in DNS, instead of checking the state in the CRD status itself. Right now there is just a timestamp, which gives no understanding.
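Purely as a hypothetical illustration of what is being asked for here (nothing like this exists in the PR), a CRD status with per-record conditions in the usual Kubernetes style might look like:

```yaml
status:
  conditions:
    - type: RecordCreated          # hypothetical condition type
      status: "True"
      lastTransitionTime: "2024-07-01T12:00:00Z"
      reason: ProviderAccepted     # hypothetical reason
      message: "A record app.example.com created in provider"
```

A tool like ArgoCD could then gate on the condition instead of re-querying DNS out of band.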
@simonoff I appreciate the kind words. Unfortunately, due to the lack of updates on this PR, I had no choice but to create my own operator that manages DNS: Phonebook
I'm actively working on it, and I am usually pretty quick to respond.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close