Status on GatewayDNSRecord
Use case
As an operator of the CAPI-DNS system, I'd like to be able to tell whether the system is working.
In particular, I want to know whether the config I provided in the management cluster is correct, so that the CAPI-DNS controller can discover the IP addresses at which a gateway is reachable.
Possible Solution
If the CAPI-DNS controller in the management cluster is unable to discover the IPs of a particular gateway, that error should be visible in the Status of the GatewayDNSRecord. Perhaps as a Condition?
Conversely, if the gateway IPs were discovered, then I should be able to see a success message and a last-reconciled timestamp.
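A rough sketch of what that could look like on the CRD, assuming the standard `metav1.Condition` type (the field and condition names here are illustrative, not the project's actual API):

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ConditionIPsDiscovered is a hypothetical condition type reporting whether the
// controller was able to discover the gateway's IP addresses.
const ConditionIPsDiscovered = "IPsDiscovered"

// GatewayDNSRecordStatus sketches the status subresource described above.
type GatewayDNSRecordStatus struct {
	// Conditions follows the standard Kubernetes condition conventions
	// (type, status, reason, message, lastTransitionTime).
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// LastReconcileTime records when the controller last attempted discovery,
	// providing the "last-reconciled timestamp" mentioned above.
	// +optional
	LastReconcileTime *metav1.Time `json:"lastReconcileTime,omitempty"`
}
```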
If the CAPI-DNS controller in the management cluster is unable to resolve the IPs of a particular gateway, that error should be visible in a Status on the GatewayDNSRecord. Perhaps as a Condition?
Is it valuable for status to be based on what the management cluster can resolve? Should this be from the workload cluster? And if so, would that require a status update from workload -> management cluster?
Would it be crazy to add a condition to Cluster?
Ah, this was just my writing being unclear. I don't mean "DNS resolution"; I mean the ability of the CAPI-DNS controller to discover the IP addresses that it should put into the EndpointSlices it creates. Roughly, that means (see the sketch after this list):
- given a GatewayDNSRecord, find the (service-side) workload Cluster it references, and the Secret used to log in to that cluster's API server
- in that API server, find either the load balancer IP or the host/pod IPs of the gateway's proxy instances
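A hedged sketch of that discovery path, assuming the workload cluster's kubeconfig is stored under a `value` key in the Secret (as CAPI does) and that the gateway is fronted by a LoadBalancer Service; the pod/host-IP fallback is omitted, and the parameter names are placeholders rather than the controller's real config:

```go
package discovery

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// discoverGatewayIPs sketches the two steps above: use the kubeconfig Secret in
// the management cluster to reach the workload cluster's API server, then read
// the gateway Service's load balancer ingress IPs.
func discoverGatewayIPs(ctx context.Context, mgmtClient client.Client,
	kubeconfigSecret client.ObjectKey, gatewayService client.ObjectKey) ([]string, error) {

	// Step 1: fetch the Secret that holds the workload cluster's kubeconfig.
	var secret corev1.Secret
	if err := mgmtClient.Get(ctx, kubeconfigSecret, &secret); err != nil {
		return nil, fmt.Errorf("getting kubeconfig secret: %w", err)
	}

	// CAPI-style kubeconfig Secrets keep the kubeconfig under the "value" key
	// (an assumption here; adjust to whatever key the project actually uses).
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(secret.Data["value"])
	if err != nil {
		return nil, fmt.Errorf("parsing kubeconfig: %w", err)
	}
	workloadClient, err := client.New(restConfig, client.Options{})
	if err != nil {
		return nil, fmt.Errorf("building workload cluster client: %w", err)
	}

	// Step 2: read the gateway's Service in the workload cluster and collect
	// the load balancer IPs that should go into the EndpointSlices.
	var svc corev1.Service
	if err := workloadClient.Get(ctx, gatewayService, &svc); err != nil {
		return nil, fmt.Errorf("getting gateway service: %w", err)
	}
	var ips []string
	for _, ingress := range svc.Status.LoadBalancer.Ingress {
		if ingress.IP != "" {
			ips = append(ips, ingress.IP)
		}
	}
	if len(ips) == 0 {
		return nil, fmt.Errorf("no load balancer IPs found for %s", gatewayService)
	}
	return ips, nil
}
```

A failure at either step is exactly what the proposed Condition should surface on the GatewayDNSRecord.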
I suppose a Cluster condition could work, but that somehow feels indirect, because the user intent to provide DNS resolution lives elsewhere (the CR or ConfigMap for this CAPI-DNS controller). If everything else about the Cluster is fine, but I've just typo'd my config for the DNS controller, I'd rather see that error close to my typo, and not have it show up for other systems that are more concerned with lifecycle management of the cluster itself.
Updated issue description for clarity.
Makes sense.
For the first design phase, let's take a look at what Contour and Knative are doing for Status and Conditions on their resources. I think there's some good thinking behind their current APIs that we can emulate, letting us start with a minimal, valuable status while also enabling extensibility in the future.
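Both projects converge on a pattern of a top-level readiness condition plus more specific sub-conditions. A hedged sketch of how the controller might record a discovery failure in that style, using the upstream condition helpers (condition types and reasons are made up for illustration):

```go
package controller

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recordDiscoveryFailure sets a specific condition for the discovery step and
// rolls it up into a top-level Ready condition, in the spirit of the
// Knative/Contour "happy condition plus sub-conditions" pattern.
func recordDiscoveryFailure(conditions *[]metav1.Condition, observedGeneration int64, err error) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "IPsDiscovered",
		Status:             metav1.ConditionFalse,
		Reason:             "GatewayIPDiscoveryFailed",
		Message:            err.Error(),
		ObservedGeneration: observedGeneration,
	})
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "Ready",
		Status:             metav1.ConditionFalse,
		Reason:             "GatewayIPDiscoveryFailed",
		Message:            "gateway IPs could not be discovered; see the IPsDiscovered condition",
		ObservedGeneration: observedGeneration,
	})
}
```

On success, the same helper would flip both conditions to True and the status would carry the last-reconciled timestamp discussed above.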