external-dns
v0.13.1 / infoblox / zone fetch and create conflict issue(s)
What happened:
An upgrade was performed to 0.13.1 after experiencing the "create/delete thrash" known issue for this provider.
After the upgrade, this version failed several of my post-deployment verification tests. Looking deeper, there appear to be two further regressions in the infoblox provider. One might actually be in infoblox-go-client, but I have not had a chance to stand up the lower-level test harness.
I expect to be able to help out and triage further, but am creating the issue for tracking/visibility, and to cover the possibility that I could be missing something.
- A valid Zone ID filter is not respected at infoblox.go#L190. I am seeing this call return tens of thousands of A records across what appears to be every one of my zones.
  - Verified that the zone filter itself is accurate by providing an invalid filter that matches 0 zones; the logs then show "Ignoring changes to '...' because a suitable Infoblox DNS zone was not found."
  - If any zone is matched, including an empty zone with no characters in common with the others that exist, all records are returned and processed for changes spanning completely separate zones (see the sketch just below this item).
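To make that verification concrete, the two runs differ only in the zone filter value; the flag names are real external-dns flags, but the values and the rest of the command line are placeholders rather than my actual configuration:

```sh
# Run 1: a filter that matches exactly one real zone; all other flags unchanged.
# Observed: A records from every zone on the grid are still fetched and planned.
external-dns --provider=infoblox --zone-id-filter=<ref-of-a-real-zone> <other flags unchanged>

# Run 2: an intentionally bogus filter that matches zero zones.
# Observed: every change is skipped with
# "Ignoring changes to '...' because a suitable Infoblox DNS zone was not found."
external-dns --provider=infoblox --zone-id-filter=no-such-zone <other flags unchanged>
```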
- Possibly correlated, this version tries to recreate existing records on every iteration, including records that I purge manually and then allow it to create itself. After the first iteration, where they are created, the second iteration attempts the same creates and receives HTTP 400 for each since they already exist. The logs also seem to indicate multiple attempts at creating the same record within milliseconds of each other (a rough out-of-band reproduction is sketched below).
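As a sanity check on the HTTP 400s, the duplicate create can be reproduced directly against the WAPI, outside external-dns. The host, credentials, WAPI version, and record values below are placeholders; the point is only that repeating an identical create for an existing A record is rejected, which matches what the logs show external-dns doing on its second iteration:

```sh
# Create an A record; the first call succeeds and returns the new object reference.
curl -k -u admin:secret -H "Content-Type: application/json" \
  -X POST "https://gridmaster.example.com/wapi/v2.10/record:a" \
  -d '{"name": "test.example.com", "ipv4addr": "10.0.0.10", "view": "default"}'

# Re-running the exact same request is rejected with HTTP 400 because the record
# already exists, which is the error external-dns receives on every later iteration.
```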
What you expected to happen:
- Existing owned records to be skipped.
- Matched zones to be processed and not the entire grid instance.
How to reproduce it (as minimally and precisely as possible):
- Install the listed version and configure it for the Infoblox provider (an example flag set is sketched after this list).
- registry: txt
- txt-owner-id: set
- txt-prefix: set
- sources: ["service", "ingress", "contour-httpproxy"]
- infoblox-view: default
- domain-filter: set
- zone-id-filter: set
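For clarity, this is roughly how that configuration translates into external-dns flags. The flag names are the standard external-dns flags for the options above; the grid host, owner id, prefix, and filter values are placeholders, not the ones from my environment:

```sh
external-dns \
  --provider=infoblox \
  --registry=txt \
  --txt-owner-id=my-cluster \
  --txt-prefix=edns- \
  --source=service \
  --source=ingress \
  --source=contour-httpproxy \
  --infoblox-grid-host=gridmaster.example.com \
  --infoblox-view=default \
  --domain-filter=example.com \
  --zone-id-filter=<zone-ref>
```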
Anything else we need to know?:
Environment:
- External-DNS version (use `external-dns --version`): 0.13.1
- DNS provider: infoblox
- Others:
  - Have tried manually setting wapiVersion, with no change in outcome (flag form sketched below).
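A sketch of how that override is set, assuming the standard external-dns Infoblox flag for the WAPI version; the value shown is only an example, not necessarily the one I tested:

```sh
external-dns --provider=infoblox --infoblox-wapi-version=2.3.1
```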
A longer delay than expected, but I am picking this back up and should have an update in the near future.
Thanks @eastwood-c! We are facing this issue too.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
As far as I'm aware, this is still a significant issue with Infoblox.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.