external-dns
Error code 523
What happened: Cloudflare returns Error 523 (origin unreachable) even though the backing service is reachable.
What you expected to happen: The service is reachable through the browser.
How to reproduce it (as minimally and precisely as possible):
- Set up Kubernetes Cluster with MetalLB which uses a floating IP and the Kubernetes NGINX ingress
- Deploy the kubernetes-bootcamp image (port 8080)
- Follow the instructions here (I used the RBAC-enabled manifest)
- Use these settings in the manifest:
```yaml
- name: external-dns
  image: registry.k8s.io/external-dns/external-dns:v0.13.5
  args:
    - --source=ingress # service and ingress is possible
    # - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above
    # - --zone-id-filter=023e105f4ecef8ad9ca31a8372d0c353 # (optional) limit to a specific zone.
    - --provider=cloudflare
    - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDOS protection, CDN...)
    - --cloudflare-dns-records-per-page=5000 # (optional) configure how many DNS records to fetch per request
```
- The ingress should point to the deployed kubernetes-bootcamp image, which runs on port 8080
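A minimal Ingress of the shape described above might look like the following sketch. The Service name and host are assumptions (the actual manifest was not included in the report); the resource name and namespace match the `external-dns/resource=ingress/default/ingress` value seen in the TXT record below.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress           # matches ingress/default/ingress in the heritage TXT record
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: mydomain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-bootcamp   # assumed Service name
                port:
                  number: 8080
```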
Anything else we need to know?:
The load balancer and deployment are confirmed working with:

```shell
curl -H 'Host: mydomain.net' http://141.95.x.x/
```
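A 523 means Cloudflare's edge could not reach the origin, even when direct access works. A hedged diagnostic sketch comparing the two paths (the `explain_status` helper is invented here; the IP stays redacted as in the report):

```shell
#!/usr/bin/env sh
# Map an HTTP status code from curl to a short diagnosis.
explain_status() {
  case "$1" in
    200) echo "origin reachable" ;;
    523) echo "Cloudflare cannot reach the origin" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# 1) Bypass Cloudflare and hit the LoadBalancer directly (works per the report):
direct=$(curl -s -o /dev/null -w '%{http_code}' -H 'Host: mydomain.net' http://141.95.x.x/)
echo "direct:  $(explain_status "$direct")"

# 2) Go through the Cloudflare proxy (this is where the 523 appears):
proxied=$(curl -s -o /dev/null -w '%{http_code}' https://mydomain.net/)
echo "proxied: $(explain_status "$proxied")"
```

If the direct request succeeds while the proxied one returns 523, the usual suspects are a firewall dropping traffic from Cloudflare's IP ranges, or the proxied record pointing at an address Cloudflare cannot route to.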
Environment:
- External-DNS version (use `external-dns --version`): registry.k8s.io/external-dns/external-dns:v0.13.5
- DNS provider: Cloudflare
- Others: The cluster is running on a KVM VPS using Virtualizor (Ubuntu 20.04 LTS x86_64)
To add, my current DNS records are:
| Type | Name | Content | Proxy Status | TTL |
|---|---|---|---|---|
| A | mydomain.net | {LoadBalancerIP} | Proxied | Auto |
| TXT | mydomain.net | "heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/default/ingress" | DNS only | Auto |
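Because the A record is marked Proxied, public resolvers should return Cloudflare edge IPs rather than the LoadBalancer IP. A small sketch to confirm the origin IP is not exposed (the `classify_record` helper is invented here; `dig` is assumed to be installed):

```shell
#!/usr/bin/env sh
# Decide whether a resolved IP equals the origin (record is DNS-only) or
# differs from it (presumably a Cloudflare edge IP, i.e. actually proxied).
classify_record() {
  resolved="$1"; origin="$2"
  if [ "$resolved" = "$origin" ]; then
    echo "dns-only"
  else
    echo "proxied"
  fi
}

# The real LoadBalancer IP is redacted in the report; substitute it here.
origin_ip="141.95.x.x"
resolved_ip=$(dig +short mydomain.net A | head -n 1)
echo "record looks: $(classify_record "$resolved_ip" "$origin_ip")"
```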
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.