external-dns
Add CRD - DNSEndpoint to the Helm chart templates
What would you like to be added: Include the DNSEndpoint CRD in the Helm chart templates. This was probably wrongly closed: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/contributing/crd-source.md
Why is this needed: It ensures that no separate CRD deployment is needed when using the Helm chart.
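For context, a minimal `DNSEndpoint` resource, based on the example in the linked crd-source docs, looks like this (the names and values here are purely illustrative):

```yaml
# Illustrative DNSEndpoint, adapted from the crd-source documentation example.
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
    - dnsName: foo.example.org   # DNS record to create
      recordTTL: 180
      recordType: A
      targets:
        - 192.168.99.216         # target of the A record
```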
I think there have been a number of discussions in this repo regarding the stability of the CRD and the suitability of making it an official component of the Helm chart. I'd like explicit approval from a maintainer that it can be included before adding it to the chart. The chart does already support the CRD workflow once the CRD is installed.
Remember that Helm shouldn't be used to install CRDs, as it can't manage their lifecycle; the only reasons to add the CRD to the chart would be to allow easy prototyping and to serve as a reference copy of the CRD.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
I think it would be really helpful to have the CRD in the Helm chart. Currently, I use the Bitnami Helm chart (https://github.com/bitnami/charts/tree/master/bitnami/external-dns) for exactly that reason. However, I would really prefer to use the official chart.
> Remember that Helm shouldn't be used to install CRDs, as it can't manage their lifecycle; the only reasons to add the CRD to the chart would be to allow easy prototyping and to serve as a reference copy of the CRD.

This is more of a problem with pre-release CRDs: if the CRD doesn't change, not being able to update it is a non-issue. AFAIK the CRD in question is pre-release and hasn't progressed recently; this might want addressing before we add the CRD to the chart and people start using Helm to install it.
There are two possibilities for the CRD:
- Put the CRDs in the special `crds` directory and install them only once. Even if Helm cannot manage the lifecycle, many CI/CD tools (e.g. Argo CD) can do this (a rough layout sketch follows below).
- Add the CRDs directly to the `templates` folder and update them with Helm (as is done in the Bitnami Helm chart). Since there is no `DNSEndpoint` resource in the Helm chart itself, it is not necessary to register the CRDs before applying other resources.
Nevertheless, both approaches are much more convenient than cloning the repo, generating the CRD and applying it by hand.
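To illustrate option 1, a rough sketch of what the chart layout might look like (the file name is an assumption, not the actual chart structure):

```text
external-dns/
├── Chart.yaml
├── crds/
│   └── dnsendpoint.yaml   # applied by `helm install` only, never upgraded or deleted by Helm
├── templates/
│   └── ...
└── values.yaml
```

Helm 3 applies whatever is in `crds/` on initial install only, so upgrades and removals of the CRD would still have to be handled out-of-band or by a CI/CD tool such as Argo CD.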
@snorwin option 2 is off the table as an anti-pattern; this is how Helm v2 hacked CRDs before the addition of support for the `crds` directory in v3. I'd be happy with option 1 if the CRD had a GA version, as the inevitable misuse would have a lower impact, but until then I don't think the benefit outweighs the cost.
FYI the CRD is available at https://raw.githubusercontent.com/kubernetes-sigs/external-dns/master/docs/contributing/crd-source/crd-manifest.yaml so there is no need to clone and build the CRD.
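So, until the CRD ships with the chart, the workflow looks roughly like the following sketch (the chart repo alias and the `sources` value name are assumptions; check the chart's values.yaml before relying on them):

```shell
# Install the DNSEndpoint CRD out-of-band first, since Helm won't manage it.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/external-dns/master/docs/contributing/crd-source/crd-manifest.yaml

# Then enable the crd source when installing/upgrading the chart.
# Assumes the chart exposes a `sources` list in its values.
helm upgrade --install external-dns external-dns/external-dns \
  --set 'sources={crd}'
```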
@stevehipwell I totally agree, I also prefer option 1.
What is still missing for the CRD to become GA? Is there an issue tracking the missing features (e.g. error messages in the status, support for TXT records, ...)?
I would be happy to contribute.
@snorwin I've not had much luck on the topic of the CRD when I've brought it up; I think this is because it's a contrib component and the maintainers here are snowed under. I'd suggest that a new issue about making the CRD GA would be the best place to start; you might also want to look in on https://github.com/kubernetes-sigs/external-dns/issues/2529 and link the two together.
@stevehipwell I opened another issue to make the CRD GA: #2941
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale