conformance: refactor listenersMatch and parentsMatch helpers into a single generic function
What would you like to be added:
Refactor listenersMatch (added in #1081) and parentsMatch to be a single generic function, or at least wrappers around a single generic implementation of the core logic.
Why this is needed:
The core logic of these two functions is quite similar (searching for all expected items in a slice of actual items, including the not-yet-implemented support for arbitrary ordering), and it would be better to implement this logic once rather than risk two similar implementations diverging.
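For context, the two helpers share the same overall shape; the following schematic sketch (not the actual conformance code, with field checks abbreviated) illustrates the duplication:

// Schematic only; assumes "testing" and v1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2" are imported.
// Both helpers walk the expected and actual slices in lockstep, differing only
// in element type and in which fields they compare.
func listenersMatch(t *testing.T, expected, actual []v1alpha2.ListenerStatus) bool {
    if len(expected) != len(actual) {
        t.Logf("Expected %d listeners, got %d", len(expected), len(actual))
        return false
    }
    for i := range expected {
        if expected[i].Name != actual[i].Name { // ...plus conditions, supported kinds, etc.
            return false
        }
    }
    return true
}

func parentsMatch(t *testing.T, expected, actual []v1alpha2.RouteParentStatus, namespaceRequired bool) bool {
    if len(expected) != len(actual) {
        t.Logf("Expected %d parents, got %d", len(expected), len(actual))
        return false
    }
    for i := range expected {
        if expected[i].ControllerName != actual[i].ControllerName { // ...plus parent ref, namespace checks, etc.
            return false
        }
    }
    return true
}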
Hi,
It looks like this refactoring could be achieved with the same level of log detail by using reflection. Would that be the way to go? I'd like to give it a try.
Regards
@FlorentFlament that sounds like it could work, thanks for volunteering!
/assign @FlorentFlament
Hi,
So I've been thinking about the implementation, and found that the parentsMatch and listenersMatch functions have a lot of type-specific logic (i.e., v1alpha2.RouteParentStatus and v1alpha2.ListenerStatus are not checked the same way).
I can propose the following approach:
Extracting the matching logic for a single element of each type into functions with the following prototypes:
func oneListenerMatch(eListener, aListener v1alpha2.ListenerStatus) bool
func oneParentMatchClosure(namespaceRequired bool) func(eParent, aParent v1alpha2.RouteParentStatus) bool
Note that oneParentMatchClosure returns a closure, which allows us to build a matcher function with the same prototype as oneListenerMatch.
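For illustration, a minimal sketch of that closure (the field checks here are hypothetical placeholders, not the real conformance checks):

// Sketch only; assumes v1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2" is imported.
// oneParentMatchClosure captures namespaceRequired so the returned function
// has the same two-argument prototype as oneListenerMatch.
func oneParentMatchClosure(namespaceRequired bool) func(eParent, aParent v1alpha2.RouteParentStatus) bool {
    return func(eParent, aParent v1alpha2.RouteParentStatus) bool {
        if namespaceRequired && aParent.ParentRef.Namespace == nil {
            return false // an explicit namespace is required but missing
        }
        // ...compare parent refs, conditions, etc....
        return eParent.ControllerName == aParent.ControllerName // placeholder check
    }
}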
Then we can implement a generic function that iterates over the elements of two lists passed as arguments, with the following prototype:
func listMatch(t *testing.T, expected, actual []interface{}, matcher matcherFunction) bool
This is the function that can later be updated to match elements regardless of their order. The generic listMatch function takes a matcherFunction as an argument, which is applied to each pair of elements:
type matcherFunction func(interface{}, interface{}) bool
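A sketch of that pre-generics listMatch might look as follows. One caveat worth noting: Go does not implicitly convert a []v1alpha2.RouteParentStatus to []interface{}, so the wrappers below would need an explicit per-element conversion (generics, used in the eventual PR, sidestep this):

// Sketch only; assumes "testing" is imported and matcherFunction is defined as above.
func listMatch(t *testing.T, expected, actual []interface{}, matcher matcherFunction) bool {
    if len(expected) != len(actual) {
        t.Logf("Expected %d elements, got %d", len(expected), len(actual))
        return false
    }
    // Pairwise, in-order comparison for now; order-insensitive matching could be added here.
    for i := range expected {
        if !matcher(expected[i], actual[i]) {
            return false
        }
    }
    return true
}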
Finally, the parentsMatch and listenersMatch functions would be rewritten like this:
func parentsMatch(t *testing.T, expected, actual []v1alpha2.RouteParentStatus, namespaceRequired bool) bool {
    oneParentMatch := oneParentMatchClosure(namespaceRequired)
    return listMatch(t, expected, actual, oneParentMatch)
}
func listenersMatch(t *testing.T, expected, actual []v1alpha2.ListenerStatus) bool {
    return listMatch(t, expected, actual, oneListenerMatch)
}
These are my initial thoughts; I'll try to write an actual implementation soon. Any comments are welcome.
Regards
Your proposed implementation seems sensible to me @FlorentFlament!
Hi,
I ended up writing a generic listMatch function (see PR):
func listMatch[elementType any](t *testing.T, expected, actual []elementType, matcher func(*testing.T, elementType, elementType) bool) bool {
    if len(expected) != len(actual) {
        t.Logf("Expected %d Gateway status listeners, got %d", len(expected), len(actual))
        return false
    }
    // TODO(mikemorris): Allow for arbitrarily ordered elements
    for i, e := range expected {
        a := actual[i]
        if !matcher(t, e, a) {
            return false
        }
    }
    return true
}
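Wired up, the earlier wrappers become thin calls into this generic helper (a sketch; it assumes the per-element matchers are adapted to accept *testing.T, which the thread does not show):

func listenersMatch(t *testing.T, expected, actual []v1alpha2.ListenerStatus) bool {
    return listMatch(t, expected, actual, oneListenerMatch)
}

func parentsMatch(t *testing.T, expected, actual []v1alpha2.RouteParentStatus, namespaceRequired bool) bool {
    matcher := func(t *testing.T, e, a v1alpha2.RouteParentStatus) bool {
        return oneParentMatchClosure(namespaceRequired)(e, a)
    }
    return listMatch(t, expected, actual, matcher)
}

The remaining TODO about arbitrary ordering could then be solved once, inside the generic helper, e.g. with a greedy search over not-yet-claimed elements (one possible approach, not code from the PR):

// listMatchUnordered is a hypothetical order-insensitive variant: every
// expected element must match some actual element that has not been claimed yet.
func listMatchUnordered[elementType any](t *testing.T, expected, actual []elementType, matcher func(*testing.T, elementType, elementType) bool) bool {
    if len(expected) != len(actual) {
        t.Logf("Expected %d elements, got %d", len(expected), len(actual))
        return false
    }
    used := make([]bool, len(actual))
    for _, e := range expected {
        found := false
        for i, a := range actual {
            if !used[i] && matcher(t, e, a) {
                used[i] = true
                found = true
                break
            }
        }
        if !found {
            return false
        }
    }
    return true
}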
Regards
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-help
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.