Error dns: bad authentication
What happened: I am getting the error below with the new external-dns release:
RFC2136 had errors in one or more of its batches: [dns: bad authentication]
What you expected to happen:
DNS record created successfully.
How to reproduce it (as minimally and precisely as possible):
Deploying the latest release generates the error.
Anything else we need to know?:
I am using the Bitnami chart for external-dns.
Environment:
- External-DNS version (use external-dns --version): Bitnami chart 7.1.0 (app 0.14.1) does not work; chart 6.38.0 (app 0.14.0) works.
- DNS provider: RFC2136
- Others: k8s version 1.29.3
time="2024-03-25T20:31:00Z" level=warning msg="No available zone found for ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-25T20:31:00Z" level=debug msg="AddRecord.ep=ingress-testing-mudpit2.k8s.tesdomain.com 300 IN A 10.98.0.31 []"
time="2024-03-25T20:31:00Z" level=info msg="Adding RR: ingress-testing-mudpit2.k8s.tesdomain.com 300 A 10.98.0.31"
time="2024-03-25T20:31:00Z" level=warning msg="No available zone found for ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-25T20:31:00Z" level=debug msg="AddRecord.ep=ingress-testing-mudpit2.k8s.tesdomain.com 0 IN TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\" []"
time="2024-03-25T20:31:00Z" level=info msg="Adding RR: ingress-testing-mudpit2.k8s.tesdomain.com 300 TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\""
time="2024-03-25T20:31:00Z" level=warning msg="No available zone found for a-ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-25T20:31:00Z" level=debug msg="AddRecord.ep=a-ingress-testing-mudpit2.k8s.tesdomain.com 0 IN TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\" []"
time="2024-03-25T20:31:00Z" level=info msg="Adding RR: a-ingress-testing-mudpit2.k8s.tesdomain.com 300 TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\""
time="2024-03-25T20:31:00Z" level=debug msg=SendMessage
time="2024-03-25T20:31:00Z" level=info msg="error in dns.Client.Exchange: dns: bad authentication"
time="2024-03-25T20:31:00Z" level=error msg="RFC2136 create record failed: dns: bad authentication"
time="2024-03-25T20:31:00Z" level=fatal msg="Failed to do run once: RFC2136 had errors in one or more of its batches: [dns: bad authentication]"
I can confirm this happens here too (same versions). Interesting points:
- Two deployments with separate zones and zone filters. I suspect it is related to #3976 and follow-up changes.
- I tried setting --rfc2136-batch-change-size=1 without success.
time="2024-03-26T13:00:37Z" level=info msg="Instantiating new Kubernetes client"
time="2024-03-26T13:00:37Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-03-26T13:00:37Z" level=info msg="Created Kubernetes client https://10.43.0.1:443"
time="2024-03-26T13:00:37Z" level=info msg="Configured RFC2136 with zone '[k8s.tesdomain.com.]' and nameserver 'x.x.x.x:53'"
time="2024-03-26T13:00:37Z" level=warning msg="No available zone found for ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-26T13:00:37Z" level=info msg="Adding RR: ingress-testing-mudpit2.k8s.tesdomain.com 300 A 10.98.0.31"
time="2024-03-26T13:00:37Z" level=info msg="error in dns.Client.Exchange: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=error msg="RFC2136 create record failed: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=warning msg="No available zone found for ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-26T13:00:37Z" level=info msg="Adding RR: ingress-testing-mudpit2.k8s.tesdomain.com 300 TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\""
time="2024-03-26T13:00:37Z" level=info msg="error in dns.Client.Exchange: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=error msg="RFC2136 create record failed: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=warning msg="No available zone found for a-ingress-testing-mudpit2.k8s.tesdomain.com, set it to 'root'"
time="2024-03-26T13:00:37Z" level=info msg="Adding RR: a-ingress-testing-mudpit2.k8s.tesdomain.com 300 TXT \"heritage=external-dns,external-dns/owner=mudpit2,external-dns/resource=ingress/ingress-testing/testsite-ingress\""
time="2024-03-26T13:00:37Z" level=info msg="error in dns.Client.Exchange: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=error msg="RFC2136 create record failed: dns: bad authentication"
time="2024-03-26T13:00:37Z" level=fatal msg="Failed to do run once: RFC2136 had errors in one or more of its batches: [dns: bad authentication dns: bad authentication dns: bad authentication]"
I guess the problem stems from this commit: https://github.com/kubernetes-sigs/external-dns/pull/3976/commits/714078dc95db9514e4613502f42f7efe7c0db10e
The field name changed, and that change is likely not correctly reflected in the Bitnami chart.
I think this issue indicates that the release changelog does not highlight such changes as breaking. It only shows the pull request title, "RFC2136: Allow multiple zones", without mentioning that the change also renames an argument.
Further inspection indicates that the zone matching no longer works correctly. For now, staying on 0.14.0 works.
See https://github.com/kubernetes-sigs/external-dns/pull/3976, which introduced the multiple-zone handling. It seems the handling of the trailing dot at the end of the zone got broken by the reordering of the dns.Fqdn calls. Zones now need to be configured without the trailing dot.
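To make the suspected mechanism concrete, here is an illustrative Go sketch of how a trailing-dot mismatch sends records to 'root'. findZone is a hypothetical stand-in, not the provider's actual function:

```go
// Illustrative sketch (not the actual external-dns code) of suffix-based
// zone matching. The "bad authentication" in the logs follows from the
// earlier warning: when no configured zone matches, the record falls back
// to 'root' and the TSIG-signed update is addressed to a zone the key is
// not valid for.
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

// findZone is a hypothetical helper: pick the configured zone that is a
// suffix of the record name, or fall back to "root".
func findZone(zones []string, name string) string {
	for _, z := range zones {
		if strings.HasSuffix(name, z) {
			return z
		}
	}
	return "root"
}

func main() {
	name := "ingress-testing-mudpit2.k8s.tesdomain.com" // record name, no trailing dot

	// Zone configured WITH a trailing dot: if the record name is no longer
	// passed through dns.Fqdn before comparing, the suffix test fails.
	fmt.Println(findZone([]string{"k8s.tesdomain.com."}, name)) // -> "root"

	// Zone configured WITHOUT the trailing dot matches the bare name,
	// which is why dropping the dot works around the regression.
	fmt.Println(findZone([]string{"k8s.tesdomain.com"}, name)) // -> "k8s.tesdomain.com"

	// Normalizing both sides with dns.Fqdn would make either spelling match.
	fmt.Println(strings.HasSuffix(dns.Fqdn(name), dns.Fqdn("k8s.tesdomain.com."))) // -> true
}
```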
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.