external-dns
Endpoints with templated prefix are not deleted
What happened:
When using a templated prefix (--txt-prefix="prefix-%{record_type}."), endpoints are not deleted when the ingress is deleted.
What you expected to happen:
Endpoints of deleted ingresses are deleted.
How to reproduce it (as minimally and precisely as possible):
Create a zone in any provider. This issue doesn't depend on the provider; it was verified with google and designate.
> kubectl apply -f ingress.yaml
> go run main.go --txt-prefix="prefix-%{record_type}." --registry txt --txt-owner-id="chris" --namespace=default --provider=google --source=ingress --kubeconfig=$KUBECONFIG --log-level=debug --google-project external-dns-testing
INFO[0060] Change zone: cloud-example-com batch #0
INFO[0060] Add records: my-app.cloud.example.com. A [155.53.119.149] 300
INFO[0060] Add records: prefix-.my-app.cloud.example.com. TXT ["heritage=external-dns,external-dns/owner=chris,external-dns/resource=ingress/default/nginx"] 300
INFO[0060] Add records: prefix-a.my-app.cloud.example.com. TXT ["heritage=external-dns,external-dns/owner=chris,external-dns/resource=ingress/default/nginx"] 300
> kubectl delete -f ingress.yaml
DEBU[0121] Matching zones against domain filters: []
DEBU[0121] Matched cloud.example.com. (zone: cloud-example-com) (visibility: public)
DEBU[0121] Considering zone: cloud-example-com (domain: cloud.example.com.)
DEBU[0121] Skipping endpoint my-app.cloud.example.com 300 IN A 155.53.119.149 [] because owner id does not match, found: "", required: "chris"
INFO[0121] All records are already up to date
From the output you can see:
- my-app.cloud.example.com is created alongside the needed TXT records
- After deleting the ingress, external-dns skips my-app.cloud.example.com because the owner label is missing
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): master
- DNS provider: verified with google and designate
- Others:
/assign @chrigl
It also happens when removing a record from the annotation in a Service
or when changing the external IP. Probably anytime external-dns needs to delete/change an existing record. We're affected by this bug.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Not stale
/remove-lifecycle stale
Not stale
It seems to be the case even when using a suffix without templating.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Can somebody disable triage on this issue? This is a serious issue which absolutely must be fixed.
/remove-lifecycle rotten
This issue was actually fixed by #3724, even though it was not mentioned there. This issue can now be closed.
Correction: while it works for A records now, for TXT records created via the DNSEndpoint API (--source=crd --crd-source-apiversion=externaldns.k8s.io/v1alpha1 --crd-source-kind=DNSEndpoint --managed-record-types=A --managed-record-types=TXT --registry=txt --txt-owner-id=kone --txt-prefix=_heritage_%{record_type}.), the issue still exists:
DNS state:
_heritage_txt.test.hasler.dev 300 IN TXT "heritage=external-dns,external-dns/owner=kone,external-dns/resource=crd/mail-msa/hasler-dev-test"
test.hasler.dev 300 IN TXT "test"
external-dns log:
time="2024-03-27T17:36:29Z" level=debug msg="Skipping endpoint test.hasler.dev 300 IN TXT \"test\" [] because owner id does not match, found: \"\", required: \"kone\""
With exactly the same configuration, A records (also when created via the DNSEndpoint API) can be added and deleted just fine. But for TXT records I get this error.
external-dns version: 0.14.1
Edit: The problem with TXT records is not related to the templated prefix and therefore not related to this issue, so this issue can indeed be closed. I addressed the above-mentioned problem with TXT records in #4342.