Dan Winship
> So for the node-level case it's easy to ensure that your endpoints are distributed correctly because you just use a DaemonSet. Maybe we need some way to easily configure...
Also, after looking at the NodeLocal DNS Cache stuff, I realized that the DNS use case actually has _two_ odd properties:

1. you want DNS traffic to stay on the...
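For concreteness, here is a minimal sketch of the node-level shape being described: a DaemonSet-backed DNS cache fronted by a Service, where traffic should stay on the originating node. The names are made up, and `internalTrafficPolicy: Local` is just one existing knob in this space; whether it actually fits the DNS case is part of the open question here.

```
# Illustrative sketch only; names are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: node-local-dns
  namespace: kube-system
spec:
  selector:
    k8s-app: node-local-dns      # matches the DaemonSet's pods
  ports:
  - name: dns
    port: 53
    protocol: UDP
  # Route traffic only to endpoints on the same node as the client --
  # the "stay on the node" property described above.
  internalTrafficPolicy: Local
```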
> @danwinship can you elaborate more what you have in mind?

I haven't dug into all the details of that feature, but the vague idea was that you'd set some...
> ```
> spec:
>   topologyRoutingConstraints:
>   - topologyKey: hostname
>     whenUnsatisfiable: RouteAnyway
>   - topologyKey: zone
>     whenUnsatisfiable: DoNotRoute
> ```

No, I wasn't thinking that you'd specify the topology key in the...
> it seems like we'd likely need to add some subset of that to EndpointSlices

ah, right

> A concern I'd have is that scheduling preferences can be different for...
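To make the EndpointSlices point concrete: per-endpoint topology information already has a natural home there. A rough sketch, modeled on the `zone` field and `hints.forZones` from `discovery.k8s.io/v1` (the exact shape a constraint-based mechanism would need is what's under discussion; the object name here is hypothetical):

```
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12            # hypothetical
  labels:
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
- name: http
  port: 8080
  protocol: TCP
endpoints:
- addresses: ["10.1.2.3"]
  zone: us-east-1a                   # topology of the backing pod
  hints:
    forZones:                        # consumers should prefer this endpoint for these zones
    - name: us-east-1a
```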
OK, try on this idea: There are exactly 2 tenancy use cases:

1. Overridable isolation: Traffic within tenants should be Allowed as though by a BANP rule which has higher...
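As a sketch of use case 1: a baseline rule that allows traffic between namespaces in the same tenant, which explicit NetworkPolicies/ANPs could still override. The `sameLabels` peer below is illustrative, not a settled API.

```
# Illustrative only; "sameLabels" is a hypothetical way of saying
# "namespaces whose 'user' label value matches the subject's".
apiVersion: policy.networking.k8s.io/v1alpha1
kind: BaselineAdminNetworkPolicy
metadata:
  name: default
spec:
  subject:
    namespaces: {}                   # applies to all namespaces
  ingress:
  - name: allow-same-tenant
    action: Allow                    # baseline allow; overridable by NetworkPolicies/ANPs
    from:
    - namespaces:
        sameLabels: ["user"]         # hypothetical tenancy selector
```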
Going back to what I was saying above about "cluster default policy", what about:

```
type ClusterDefaultPolicySpec struct {
	// tenancyLabels defines the labels used for dividing namespaces up into...
```
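Purely as a guess at how that might look on the wire (the group/version, object name, and everything other than `tenancyLabels` are placeholders, not from the comment):

```
# Hypothetical object, for illustration only.
apiVersion: policy.networking.k8s.io/v1alpha1
kind: ClusterDefaultPolicy
metadata:
  name: default                      # presumably a cluster-scoped singleton
spec:
  tenancyLabels: ["user"]            # namespaces sharing values for these labels form a tenant
```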
> I am still not on board with creating a new `NetworkTenancy` object..... (And I think generally most of the others are of the same opinion)

I agree we should...
> and it will have the problem that different policies can have different definitions of tenancy, even though that makes the implementation more complicated and there are no user stories...
Right. So I'm leaning toward either:

### 1. A single external tenant definition, arbitrary rules in ANPs/BANPs:

```
kind: TenantDefinition
metadata:
  name: default
spec:
  labels: ["user"]
```

and then (based...
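Purely as an illustration of the second half of that option (not from the original comment): an ANP/BANP rule would then be able to refer to the externally defined tenants rather than carrying its own tenancy definition. The `tenants:` peer below is invented for the sketch.

```
# Hypothetical; "tenants: NotSame" is an invented peer meaning
# "namespaces in a different tenant, per the TenantDefinition named default".
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: tenant-isolation
spec:
  priority: 10
  subject:
    namespaces: {}
  ingress:
  - name: deny-other-tenants
    action: Deny
    from:
    - tenants: NotSame
```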