external-dns
How do I turn off apex domain updates (Azure)
Since upgrading to the latest release (0.5.18) and removing a lock on my DNS resource group, all hell broke loose on my Azure DNS zone: five different external-dns services in five Kubernetes clusters all updated the apex domain of my single DNS zone. How do I turn off this feature? I am happy for the service to create A records for all my services (e.g. service1.domain.co.uk), but not the root @ A record of domain.co.uk. I want to stop the following from happening...
time="2020-01-15T15:50:00Z" level=info msg="Would update A record named '@' to 'x.x.x.x' for Azure DNS zone 'domain.co.uk'."
time="2020-01-15T15:50:00Z" level=info msg="Would update TXT record named '@' to '\"heritage=external-dns,external-dns/owner=sandbox,external-dns/resource=service/sandbox/sandbox-ingress-nginx-ingress-controller\"' for Azure DNS zone 'domain.co.uk'."
I can't see in the docs how to turn this off.
My installed Helm version is v2.15.2.
I use Tillerless Helm to deploy.
I am deploying the latest chart:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
external-dns 1 Wed Jan 15 19:02:08 2020 DEPLOYED external-dns-2.14.3 0.5.18 default
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-sboxagents-15189759-0 Ready agent 47m v1.15.7 10.240.0.5 <none> Ubuntu 16.04.6 LTS 4.15.0-1064-azure docker://3.0.8
aks-sboxagents-15189759-1 Ready agent 47m v1.15.7 10.240.0.4 <none> Ubuntu 16.04.6 LTS 4.15.0-1064-azure docker://3.0.8
My values.yaml file:
domainFilters:
  - domain.co.uk
logLevel: info
policy: upsert-only
provider: azure
registry: "txt"
sources:
  - ingress
azure:
  aadClientId: REDACTED
  resourceGroup: REDACTED
  subscriptionId: REDACTED
rbac:
  create: true
The values I set on the command line:
'-f kubernetes/03-external-dns/values.yaml --set namespace=sandbox --set dryRun=true --set txtOwnerId=sandbox --set txtPrefix=sandbox --set azure.tenantId=REDACTED --set azure.aadClientSecret=REDACTED'
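As an aside, with dryRun=true set the chart should be passing --dry-run to the container, which is why the provider logs say "Would update" rather than actually applying the change. A quick way to confirm exactly which flags the chart rendered (assuming the default deployment name):
kubectl get deployment external-dns -o jsonpath='{.spec.template.spec.containers[0].args}'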
Why do I get this? "Would update A record named '@'"
Can I turn this feature off? It didn't happen in 0.5.9, and I have tried reverting to that version, but then I run into permission errors with /etc/kubernetes/azure.json.
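Ideally external-dns would manage *.domain.co.uk but never touch the zone apex. Newer releases appear to add a --regex-domain-filter flag that could express that restriction, sketched below; I haven't found an equivalent in 0.5.18, and the same filter may also be applied to zone discovery depending on the provider, so test it with --dry-run first:
--regex-domain-filter=.*\.domain\.co\.uk$
This regex matches service1.domain.co.uk (there is a literal dot before domain.co.uk) but not the bare apex domain.co.uk.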
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
I take it no one can help?
/remove-lifecycle rotten
/triage support
I'm having a similar issue with DigitalOcean. It correctly creates an A record for service.customer.tld, but it also creates an A record for customer.tld, which we don't want. I don't see anything in the config to stop it from doing that.
I am currently experiencing an issue where External DNS is updating an Azure DNS apex record that it does not own. We have tried creating a "false" heritage record to explicitly tell External DNS that it does NOT own this record, yet the apex record continues to be updated.
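For reference, the decoy heritage record can be created with the Azure CLI roughly like this (a sketch; the resource group and owner value are placeholders from our setup, and as noted it did not stop the updates):
az network dns record-set txt add-record \
  --resource-group REDACTED \
  --zone-name domain.co.uk \
  --record-set-name "@" \
  --value "heritage=external-dns,external-dns/owner=not-external-dns"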
We are experiencing this issue as well. A number of entries in our Azure DNS zone are manually managed, including the apex record, which points to our production application's entry point.
Any time someone inadvertently references the zone name in an Ingress rules[*].host, tls[*].hosts, or a Service annotation, External DNS overwrites the apex record, killing our production application.
We've found no mechanism to protect the apex record against this.
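The closest mitigation we can think of (a sketch, assuming the deployed external-dns supports --annotation-filter; the external-dns-managed annotation key is our own convention, not an external-dns built-in) is to make syncing opt-in, so an accidental apex reference on an unannotated Ingress or Service is simply ignored:
--annotation-filter=external-dns-managed in (true)
Only objects annotated with external-dns-managed: "true" would then be processed at all.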