v0.12.0: Improve migration process for txt-prefix/suffix users
v0.12.0 will introduce a new TXT record format that includes the record type in the name: https://github.com/kubernetes-sigs/external-dns/pull/2157
But for people currently using `--txt-prefix` or `--txt-suffix`, migrating to v0.12.0 will be a little annoying.
Currently, using CNAMEs is not possible by default: the registry TXT record would share a name with the CNAME, and a CNAME cannot coexist with other records at the same name, so you have to add a prefix or suffix to avoid the conflict. That use case for the prefix/suffix can be replaced by the new record format.
But once you are using a prefix/suffix, upgrading to the v0.12.0 format cannot be done automatically. If you update the prefix/suffix to include `%{record_type}`, or just remove the prefix/suffix, external-dns will ignore existing records until you create the updated TXT records manually. That's the annoying part I want us to fix.
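To make that concrete, here's a minimal sketch of how the two formats name the registry TXT record for a CNAME endpoint (the endpoint name and prefix are made up; only the naming scheme comes from the linked PR):

```go
package main

import "fmt"

func main() {
	endpoint := "foo.example.com" // hypothetical CNAME endpoint
	recordType := "cname"         // the endpoint's record type, lower-cased

	// Legacy format: a prefix (or suffix) keeps the registry TXT record
	// from colliding with the CNAME at the same name.
	oldTXT := "myprefix-" + endpoint

	// v0.12.0 format: the record type is embedded in the name, so the
	// TXT record never shares a name with the CNAME.
	newTXT := fmt.Sprintf("%s-%s", recordType, endpoint)

	fmt.Println(oldTXT) // myprefix-foo.example.com
	fmt.Println(newTXT) // cname-foo.example.com
}
```

The migration problem is exactly the gap between those two names: nothing maps `myprefix-foo.example.com` to `cname-foo.example.com` automatically.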
What would you like to be added:
There are some options for improving the migration process:
- Add a new CLI option, `--old-record-format='myprefix-%{record}'`. Combined with the existing `--once` flag, external-dns would perform a one-time migration of old records to and from any format you want (a rough sketch follows this list).
  - Most flexible, most work
- Add a new CLI option called something like `--ignore-prefix/suffix-in-new-record-format`. This would not change how external-dns creates the "old" record format (which still includes the prefix/suffix), but the "new" record format would be created ignoring the prefix/suffix. That would automatically migrate users away from needing a prefix/suffix.
  - Less flexible (it only applies to my use case, where the prefix/suffix exists solely to get CNAMEs working) and less work
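For option 1, the core of the migration could be a pure rename function. A hypothetical sketch, assuming the proposed `--old-record-format='myprefix-%{record}'` template syntax (the flag and the `%{record}` placeholder are from the proposal above, not an existing external-dns feature):

```go
package main

import (
	"fmt"
	"strings"
)

// migrateTXTName translates a registry TXT name written in the old
// template (e.g. "myprefix-%{record}") into the v0.12.0 format, which
// embeds the record type. This is a sketch of the proposal, not
// actual external-dns code.
func migrateTXTName(oldName, oldFormat, recordType string) (string, bool) {
	parts := strings.SplitN(oldFormat, "%{record}", 2)
	if len(parts) != 2 {
		return "", false // template has no %{record} placeholder
	}
	prefix, suffix := parts[0], parts[1]

	if !strings.HasPrefix(oldName, prefix) || !strings.HasSuffix(oldName, suffix) {
		return "", false // not a registry record in the old format
	}
	record := strings.TrimSuffix(strings.TrimPrefix(oldName, prefix), suffix)

	return fmt.Sprintf("%s-%s", strings.ToLower(recordType), record), true
}

func main() {
	name, ok := migrateTXTName("myprefix-foo.example.com", "myprefix-%{record}", "CNAME")
	fmt.Println(name, ok) // cname-foo.example.com true
}
```

Run with `--once`, a pass like this could rewrite every matching TXT record and then exit, leaving the zone in the new format.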
Thoughts?
Why generate an error (and roll back the DNS update transaction) if `${prefix}a-${domain}` isn't found? I don't see a case in which a missing entry like that should be an error.
I guess option 1 seems to be the way to go, IMO. @njuettner @Raffo, do you have an opinion on that? Does anyone else?
Option 1 seems the most versatile and will definitely suit more scenarios where users want to migrate TXT formats (e.g. adding a txt prefix/suffix where one previously didn't exist, or changing the existing format).
In our case we want to migrate from one txt-prefix to a different txt-prefix format; however, would the goal be to support all migration paths?
- No prefix/suffix -> prefix/suffix
- Prefix/suffix -> no prefix/suffix
- Prefix -> suffix
- Suffix -> prefix
- Prefix -> different prefix
- Suffix -> different suffix

There's also the argument that such migration flags should support migrating the owner ID too, or would that be a separate concern?
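On the owner ID: if I read the registry right, the owner lives in the TXT record's value (the heritage string), not its name, so migrating it would be an independent rewrite. A hypothetical sketch (the `external-dns/owner=` label is the one external-dns stores in TXT values; the helper itself is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteOwner replaces the owner label inside a registry TXT value.
// Hypothetical helper for illustration only.
func rewriteOwner(txtValue, newOwner string) string {
	labels := strings.Split(txtValue, ",")
	for i, l := range labels {
		if strings.HasPrefix(l, "external-dns/owner=") {
			labels[i] = "external-dns/owner=" + newOwner
		}
	}
	return strings.Join(labels, ",")
}

func main() {
	v := "heritage=external-dns,external-dns/owner=old-id,external-dns/resource=ingress/default/my-app"
	fmt.Println(rewriteOwner(v, "new-id"))
	// heritage=external-dns,external-dns/owner=new-id,external-dns/resource=ingress/default/my-app
}
```

Since the name migration and the owner migration touch different parts of the record, treating them as separate concerns seems defensible.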
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/assign @Raffo
I like the idea, @Raffo, and would like to hear your opinion.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale