external-dns
Consider ProviderSpecific when sorting Endpoints
Description
The master branch was failing tests for me because it landed on the wrong end of the unstable sort:
$ go test sigs.k8s.io/external-dns/internal/testutils
--- FAIL: ExampleSameEndpoints (0.00s)
got:
abc.com 0 IN A test-set-1 1.2.3.4 []
abc.com 0 IN TXT something []
bbc.com 0 IN CNAME foo.com []
cbc.com 60 IN CNAME foo.com []
example.org 0 IN load-balancer.org [{foo bar}]
example.org 0 IN load-balancer.org []
example.org 0 IN TXT load-balancer.org []
want:
abc.com 0 IN A test-set-1 1.2.3.4 []
abc.com 0 IN TXT something []
bbc.com 0 IN CNAME foo.com []
cbc.com 60 IN CNAME foo.com []
example.org 0 IN load-balancer.org []
example.org 0 IN load-balancer.org [{foo bar}]
example.org 0 IN TXT load-balancer.org []
FAIL
FAIL sigs.k8s.io/external-dns/internal/testutils 0.003s
FAIL
There is a test that verifies that Endpoints with ProviderSpecific properties are sorted after those without. Since we use an unstable sort, the comparison function needs to consider ProviderSpecific to guarantee the intended ordering.
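A minimal sketch of the idea: with an unstable sort, two endpoints that compare equal on every field used by the comparator can come out in either order, so ProviderSpecific must be a tie-breaker. The types and helper below are simplified stand-ins for the real structs in sigs.k8s.io/external-dns/endpoint, not the actual implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified, hypothetical versions of the external-dns endpoint types.
type ProviderSpecificProperty struct {
	Name, Value string
}

type Endpoint struct {
	DNSName          string
	RecordType       string
	ProviderSpecific []ProviderSpecificProperty
}

// providerSpecificKey flattens the properties into a string so that
// endpoints differing only in ProviderSpecific still compare unequal
// and therefore sort deterministically under an unstable sort.
func providerSpecificKey(e Endpoint) string {
	key := ""
	for _, p := range e.ProviderSpecific {
		key += p.Name + "=" + p.Value + ";"
	}
	return key
}

func sortEndpoints(eps []Endpoint) {
	sort.Slice(eps, func(i, j int) bool {
		if eps[i].DNSName != eps[j].DNSName {
			return eps[i].DNSName < eps[j].DNSName
		}
		if eps[i].RecordType != eps[j].RecordType {
			return eps[i].RecordType < eps[j].RecordType
		}
		// Tie-break on ProviderSpecific: an endpoint with no
		// properties ("" key) sorts before one that has them.
		return providerSpecificKey(eps[i]) < providerSpecificKey(eps[j])
	})
}

func main() {
	eps := []Endpoint{
		{DNSName: "example.org", ProviderSpecific: []ProviderSpecificProperty{{Name: "foo", Value: "bar"}}},
		{DNSName: "example.org"},
	}
	sortEndpoints(eps)
	// The endpoint without ProviderSpecific now reliably comes first.
	fmt.Println(len(eps[0].ProviderSpecific), len(eps[1].ProviderSpecific))
}
```

Without the final tie-breaker, sort.Slice (which is not stable) is free to emit the two example.org endpoints in either order, which is exactly the flakiness the failing ExampleSameEndpoints test exposed.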
Checklist
- [x] Unit tests updated (fixed anyway)
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: olemarkus
Once this PR has been reviewed and has the lgtm label, please assign szuecs for approval by writing /assign @szuecs in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/kind bug
@olemarkus: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.