external-dns
Removing duplicate endpoint
What happened: This happens when I use the tls section in the Ingress spec. The flag '--ignore-ingress-tls-spec' does not help.
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - alertmanager.domain.dom
      secretName: alertmanager
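For reference, this is roughly how that flag is passed to the external-dns container (a minimal sketch, assuming a plain Deployment install; the image tag and the other args are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  template:
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0
          args:
            - --source=ingress
            - --provider=google
            # skip hostnames listed under spec.tls when generating endpoints
            - --ignore-ingress-tls-spec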
I get messages in the logs about duplicates; here is one occurrence:
2023-11-14T12:16:17+03:00 time="2023-11-14T09:16:17Z" level=debug msg="Endpoints generated from ingress: observability/prometheus-kube-prometheus-alertmanager: [alertmanager.domain.dom 0 IN A 1.1.1.1 [] alertmanager.domain.dom 0 IN A 1.1.1.1 []]"
2023-11-14T12:16:17+03:00 time="2023-11-14T09:16:17Z" level=debug msg="Removing duplicate endpoint alertmanager.domain.dom 0 IN A 1.1.1.1 []"
2023-11-14T12:16:17+03:00 time="2023-11-14T09:16:17Z" level=info msg="All records are already up to date"
What you expected to happen: I expect the log message
2023-11-14T12:16:17+03:00 time="2023-11-14T09:16:17Z" level=debug msg="Endpoints generated from ingress: observability/prometheus-kube-prometheus-alertmanager: [alertmanager.domain.dom 0 IN A 1.1.1.1 [] alertmanager.domain.dom 0 IN A 1.1.1.1 []]"
to list
alertmanager.domain.dom 0 IN A 1.1.1.1 []
only once, and no message containing
msg="Removing duplicate endpoint"
to appear at all.
Anything else we need to know?:
My Ingress looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-alertmanager
  namespace: observability
  uid: fabf020d-afd7-47dc-a952-e38469c4e993
  resourceVersion: '285337624'
  generation: 1
  creationTimestamp: '2022-10-18T13:39:11Z'
  labels:
    app: kube-prometheus-stack-alertmanager
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kube-prometheus-stack
    app.kubernetes.io/version: 51.2.0
    chart: kube-prometheus-stack-51.2.0
    heritage: Helm
    release: prometheus
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: observability
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required
    nginx.ingress.kubernetes.io/auth-secret: prometheus-basic-auth-secret
    nginx.ingress.kubernetes.io/auth-type: basic
  ...
    - manager: nginx-ingress-controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2023-09-29T14:49:26Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:loadBalancer:
            f:ingress: {}
      subresource: status
status:
  loadBalancer:
    ingress:
      - ip: 1.1.1.1
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - alertmanager.domain.dom
      secretName: alertmanager.domain.dom
  rules:
    - host: alertmanager.domain.dom
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: prometheus-kube-prometheus-alertmanager
                port:
                  number: 9093
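Note that the hostname appears twice in this manifest, once under spec.rules and once under spec.tls.hosts. The ingress source considers both by default (which is what --ignore-ingress-tls-spec is meant to disable), and that matches the two identical entries in the "Endpoints generated from ingress" debug line. A trimmed excerpt of the relevant fields:

spec:
  tls:
    - hosts:
        - alertmanager.domain.dom   # the same host also listed under rules
      secretName: alertmanager.domain.dom
  rules:
    - host: alertmanager.domain.dom # the same host also listed under tls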
Environment:
- External-DNS version: v0.14.0
- DNS provider: Google Cloud DNS
- Others: I checked my DNS logs and saw that external-dns changed nothing in Cloud DNS at the time it was logging the duplicate messages.
Same here; however, the flag --ignore-ingress-tls-spec only helped to avoid creating the duplicate. There is still a constant Del/Add loop. I even set a unique external-dns/owner attribute to avoid clashing between similar zones:
{"level":"debug","msg":"Matching zones against domain filters: {[poc.my.domain.com] [] \u003cnil\u003e \u003cnil\u003e}","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Matched poc.my.domain.com. (zone: poc-my-domain-com) (visibility: public)","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Filtered my.domain.com. (zone: my-domain-com) (visibility: public)","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Considering zone: poc-my-domain-com (domain: poc.my.domain.com.)","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Endpoints generated from ingress: poc/myapplication: [poc.my.domain.com 0 IN A 34.107.27.167 []]","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Matching zones against domain filters: {[poc.my.domain.com] [] \u003cnil\u003e \u003cnil\u003e}","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Matched poc.my.domain.com. (zone: poc-my-domain-com) (visibility: public)","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Filtered my.domain.com. (zone: my-domain-com) (visibility: public)","time":"2023-12-15T20:07:18Z"}
{"level":"debug","msg":"Considering zone: poc-my-domain-com (domain: poc.my.domain.com.)","time":"2023-12-15T20:07:18Z"}
{"level":"info","msg":"Change zone: poc-my-domain-com batch #0","time":"2023-12-15T20:07:18Z"}
{"level":"info","msg":"Del records: poc.my.domain.com. A [34.107.27.167] 300","time":"2023-12-15T20:07:18Z"}
{"level":"info","msg":"Del records: poc.my.domain.com. TXT [\"heritage=external-dns,external-dns/owner=external-dns-poc.my.domain.com.,external-dns/resource=ingress/poc/myapplication\"] 300","time":"2023-12-15T20:07:18Z"}
{"level":"info","msg":"Add records: poc.my.domain.com. A [34.107.27.167] 300","time":"2023-12-15T20:07:18Z"}
{"level":"info","msg":"Add records: poc.my.domain.com. TXT [\"heritage=external-dns,external-dns/owner=external-dns-poc.my.domain.com.,external-dns/resource=ingress/poc/myapplication\"] 300","time":"2023-12-15T20:07:18Z"}
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the /close not-planned command above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.