external-dns
Cloudflare: A records are created for both the private and public IP
What happened: configured external-dns with a Cloudflare token, then configured a service with these annotations:
external-dns.alpha.kubernetes.io/access: public
external-dns.alpha.kubernetes.io/cloudflare-proxied: 'false'
external-dns.alpha.kubernetes.io/hostname: vpn.test.oc4.be1.io
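For context, a minimal Service manifest carrying these annotations might look like the sketch below; the service name, selector, and port are hypothetical placeholders, not taken from the report:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vpn  # hypothetical name, not from the original report
  annotations:
    external-dns.alpha.kubernetes.io/access: public
    external-dns.alpha.kubernetes.io/cloudflare-proxied: 'false'
    external-dns.alpha.kubernetes.io/hostname: vpn.test.oc4.be1.io
spec:
  type: LoadBalancer  # the provider's LB exposes both a private and a public IP
  selector:
    app: vpn  # hypothetical selector
  ports:
    - name: openvpn  # hypothetical port for a VPN service
      port: 1194
      protocol: UDP
```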
What you expected to happen: only the public IP of the service should be configured in DNS. Instead, two A records are created: one with the private address and one with the public address.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): v0.10.2
- DNS provider: Cloudflare
- Others:
Related issues: #2171, #2450
To overcome this issue I added the annotations below so the load balancer only gets one IP. This is only a workaround for those struggling like me; who can help with a final solution?
external-dns.alpha.kubernetes.io/access: public
external-dns.alpha.kubernetes.io/cloudflare-proxied: 'false'
external-dns.alpha.kubernetes.io/hostname: xxxx
load-balancer.hetzner.cloud/disable-private-ingress: 'true'
load-balancer.hetzner.cloud/use-private-ip: 'true'
load-balancer.hetzner.cloud/ipv6-disabled: 'true'
load-balancer.hetzner.cloud/name: k8s-apps
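As I understand the Hetzner cloud-controller-manager annotations, this workaround works by keeping the load balancer's private address out of the Service status, so external-dns only ever sees the public IP. Assembled on a Service, the metadata would look roughly like this (the hostname stays a placeholder, as above):

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/access: public
    external-dns.alpha.kubernetes.io/cloudflare-proxied: 'false'
    external-dns.alpha.kubernetes.io/hostname: xxxx  # placeholder, set your hostname
    # Hetzner CCM: don't publish the LB's private IP in the Service status
    load-balancer.hetzner.cloud/disable-private-ingress: 'true'
    # Hetzner CCM: the LB reaches the nodes over the private network
    load-balancer.hetzner.cloud/use-private-ip: 'true'
    # Hetzner CCM: no IPv6 address on the LB
    load-balancer.hetzner.cloud/ipv6-disabled: 'true'
    load-balancer.hetzner.cloud/name: k8s-apps
```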
Bumping this, as I have the same issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.