triggerLoopOnEvent: true / event mode doesn't seem to work for CRDs
What happened:
The external-dns pod is set up with loop-on-event enabled and a 30-minute interval; container args:
```yaml
- --metrics-address=:7979
- --log-level=info
- --log-format=json
- --events
- --domain-filter=blahblah.net
- --domain-filter=blahblah.dev
- --policy=sync
- --provider=azure
- --registry=txt
- --interval=30m
- --txt-owner-id=sandbox-west-4
- --txt-prefix=_
- --source=crd
- --source=service
- --source=ingress
```
If I create Ingresses, I can see in the external-dns logs that it springs into life within 30s to 1 minute. But when I create a DNSEndpoint CRD like this:
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: testcrdrecord
spec:
  endpoints:
    - dnsName: testcrd.blahblah.dev
      recordTTL: 180
      recordType: A
      targets:
        -
```
I have to wait for up to the full 30-minute reconciliation interval (10:20 to 10:50):
```console
11/03/2022_10:20:03-(⎈ |sandbox-west-4-admin:default)-dns-external-regex/crd➜ crd k apply -f crd.yaml
dnsendpoint.externaldns.k8s.io/testcrdrecord created
```
```
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating A record named 'testcrd' to '' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:13Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating A record named 'test1' to '' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:14Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating TXT record named '_testcrd' to '"heritage=external-dns,external-dns/owner=sandbox-west-4,external-dns/resource=crd/default/davecrdrecord"' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:15Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating TXT record named '_test1' to '"heritage=external-dns,external-dns/owner=sandbox-west-4,external-dns/resource=ingress/platform-monitoring/test1"' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:16Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Deleting A record named 'test1' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:49Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Deleting TXT record named '_test1' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:19:50Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Deleting A record named 'testcrd' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:20:44Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Deleting TXT record named '_testcrd' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:20:45Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating A record named 'testcrd' to '*********' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:50:16Z"}
external-dns-5797dd8c6c-hhftw external-dns {"level":"info","msg":"Updating TXT record named '_testcrd' to '"heritage=external-dns,external-dns/owner=sandbox-west-4,external-dns/resource=crd/default/testcrdrecord"' for Azure DNS zone 'blahblah.dev'.","time":"2022-03-11T10:50:17Z"}
```
However, if I create an ingress and a CRD at the same time, the CRD is processed immediately. The same applies on deletion, which you can see in the logs above at 10:19 and 10:20.
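This would make sense if only some sources actually emit events: the ingress change wakes the reconcile loop, and once awake the loop picks up the pending DNSEndpoint as a side effect. For illustration, here is how I understand the event wiring in the v0.10.x code base. This is a hypothetical sketch, not the actual implementation; names and method bodies are approximations:

```go
// Hypothetical sketch of the external-dns event wiring; names and
// bodies are approximations, not the actual v0.10.x code.
package main

import "context"

// Source is (roughly) the interface every external-dns source implements.
// With --events, the controller passes each source a callback that wakes
// the reconcile loop.
type Source interface {
	AddEventHandler(ctx context.Context, handler func())
}

// Informer-backed sources (service, ingress) register the callback on
// their shared informers, so any add/update/delete retriggers the loop.
type ingressSource struct{ /* shared informer, lister, ... */ }

func (s *ingressSource) AddEventHandler(ctx context.Context, handler func()) {
	// Roughly: informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	//     AddFunc:    func(obj interface{}) { handler() },
	//     UpdateFunc: func(old, new interface{}) { handler() },
	//     DeleteFunc: func(obj interface{}) { handler() },
	// })
}

// The crd source in v0.10.2 appears to register nothing at all, so a
// DNSEndpoint change is only seen on the next full --interval pass.
type crdSource struct{}

func (s *crdSource) AddEventHandler(ctx context.Context, handler func()) {}
```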
What you expected to happen:
I'd naturally expect the loop-on-event flag to work for the ingress, service, and crd sources alike.
How to reproduce it (as minimally and precisely as possible):
See above; a condensed sequence is sketched below.
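Assuming the deployment args above, with `crd.yaml` holding the DNSEndpoint manifest shown earlier and `ingress.yaml` being any Ingress matching the domain filter (both file names are illustrative):

```console
# Case 1: apply only the DNSEndpoint. With --events set, nothing happens
# until the next 30m reconciliation pass.
kubectl apply -f crd.yaml
kubectl logs deploy/external-dns -f    # quiet for up to 30 minutes

# Case 2: apply an Ingress alongside it. Both are processed within
# roughly 30-60s, because the ingress event wakes the reconcile loop.
kubectl apply -f crd.yaml -f ingress.yaml
kubectl logs deploy/external-dns -f    # records created almost immediately
```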
Anything else we need to know?:
I don't think so.
Environment:
- External-DNS version (use `external-dns --version`): v0.10.2; Helm chart: https://charts.bitnami.com/bitnami, version 6.1.4
- DNS provider: Azure
- Others:
@DaveMullin see https://github.com/kubernetes-sigs/external-dns/pull/2220
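For anyone landing here later: if I read that PR correctly, it gives the crd source a real event handler that watches DNSEndpoint objects and wakes the reconcile loop on changes. Purely as a hypothetical sketch of the general shape (not the actual PR code; the `crdClient` field and resource wiring are assumptions):

```go
// Hypothetical sketch: watch DNSEndpoint objects and invoke the
// controller's callback on any change. Not the code from PR #2220.
package main

import (
	"context"
	"log"

	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/rest"
)

type crdSource struct {
	crdClient rest.Interface // REST client for externaldns.k8s.io/v1alpha1 (assumed)
	namespace string
}

func (cs *crdSource) AddEventHandler(ctx context.Context, handler func()) {
	go func() {
		// Open a watch on dnsendpoints in the configured namespace.
		w, err := cs.crdClient.Get().
			Namespace(cs.namespace).
			Resource("dnsendpoints").
			Param("watch", "true").
			Watch(ctx)
		if err != nil {
			log.Printf("failed to watch DNSEndpoints: %v", err)
			return
		}
		defer w.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case ev, ok := <-w.ResultChan():
				if !ok {
					return
				}
				// Any add/update/delete should retrigger the loop.
				switch ev.Type {
				case watch.Added, watch.Modified, watch.Deleted:
					handler()
				}
			}
		}
	}()
}
```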
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.