
Granular Multi-cluster exports

Open hdiass opened this issue 2 years ago • 4 comments

What problem are you trying to solve?

I have 3 clusters connected with Linkerd, all linked in both directions, so there are 6 links. I install an app in one namespace and I want its service to be exported to only one of those clusters, not the other two. Is it possible to have this granular control? For example, when exporting, also passing another label that controls which clusters the service is exported to.

How should the problem be solved?

A label/annotation that controls which clusters a service is exported to.

Any alternatives you've considered?

No way around it unfortunately; I had to disable it completely because I had two conflicting operators, Rook and Linkerd, in this scenario.

How would users interact with this feature?

A label/annotation that controls which clusters a service is exported to.

Would you like to work on this feature?

No response

hdiass avatar Jul 11 '22 14:07 hdiass

I have not tested this myself, but it should already be doable, and I'm curious whether it works well for you.

What you are asking for should already exist, but inverted: instead of a label controlling which clusters a service is exported to, you can link the clusters in such a way that a cluster will only mirror services that match a specific set of labels.

You can pass the --selector flag to linkerd multicluster link to say "only mirror services that match these labels". Then, on the target cluster, you can label services so that the clusters linked to it end up mirroring only certain services and not others.
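To make the selector semantics concrete, here is a minimal Python sketch (not Linkerd code) of how an equality-based label selector gates which services get mirrored; the selector string and label names are just placeholders:

```python
def matches(selector: str, labels: dict) -> bool:
    # An equality-based selector like "a=1,b=2" matches only if every
    # key=value pair is present in the object's labels.
    pairs = (term.split("=", 1) for term in selector.split(","))
    return all(labels.get(k) == v for k, v in pairs)

# A service carrying the export label matches the link's selector...
print(matches("mirror.linkerd.io/exported=true",
              {"mirror.linkerd.io/exported": "true"}))  # True
# ...while an unlabeled service does not, so it is never mirrored.
print(matches("mirror.linkerd.io/exported=true", {}))   # False
```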

kleimkuhler avatar Jul 11 '22 19:07 kleimkuhler

I don't think that's the behaviour I want/need. I think what you mean is just the label that triggers the export, which by default is (mirror.linkerd.io/exported=true), but customizing it won't give the granular control I want (which is not per cluster but per service, for example). I think it would be possible to get some of this control with what you mention, but the best option would be to have this control at the annotation level on each service, instead of for the entire cluster.

hdiass avatar Jul 12 '22 09:07 hdiass

Another option would be a specific "ignore" flag applied to a namespace that prevents Linkerd from creating services from other clusters in that namespace; I think that one would be glorious.

hdiass avatar Jul 12 '22 09:07 hdiass

I may not have explained it well enough, but you can do this with the current Link features. I'll try to provide an example that illustrates this more clearly.

Let's say we have 3 clusters X, Y, and Z. On cluster X, we have foo and bar services. We want to mirror foo on cluster Y and bar on cluster Z.

Service foo has the following labels

...
kind: Service
metadata:
  name: foo
  labels:
    mirror.linkerd.io/exported: "true"
    mirror-on: cluster-Y
...

Service bar has the following labels

...
kind: Service
metadata:
  name: bar
  labels:
    mirror.linkerd.io/exported: "true"
    mirror-on: cluster-Z
...

Now, we link cluster X to Y and want to make sure that only foo is exported by setting the label selector

$ linkerd --context X multicluster link --cluster-name cluster-X --selector "mirror.linkerd.io/exported=true,mirror-on=cluster-Y" | kubectl --context Y apply -f -

And we link cluster X to Z with a similar command, but changing the label selector

$ linkerd --context X multicluster link --cluster-name cluster-X --selector "mirror.linkerd.io/exported=true,mirror-on=cluster-Z" | kubectl --context Z apply -f -

In the end state, X is linked to both Y and Z. However, cluster Y only mirrors services with the mirror-on: cluster-Y label, and cluster Z only mirrors services with the mirror-on: cluster-Z label.

I chose mirror-on: cluster-... arbitrarily; it can be any label you want. What I'm trying to illustrate is that the label selector you use in the linkerd multicluster link command dictates which services actually get mirrored.
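To sanity-check the selector logic in this example, here is a small Python sketch (again, not Linkerd code) that applies each link's selector to the two services' labels and shows which cluster would mirror which service:

```python
def matches(selector: str, labels: dict) -> bool:
    # Equality-based selector: every key=value term must match.
    pairs = (term.split("=", 1) for term in selector.split(","))
    return all(labels.get(k) == v for k, v in pairs)

# The labels from the foo and bar service manifests above.
services = {
    "foo": {"mirror.linkerd.io/exported": "true", "mirror-on": "cluster-Y"},
    "bar": {"mirror.linkerd.io/exported": "true", "mirror-on": "cluster-Z"},
}
# The --selector value used in each link command.
link_selectors = {
    "cluster-Y": "mirror.linkerd.io/exported=true,mirror-on=cluster-Y",
    "cluster-Z": "mirror.linkerd.io/exported=true,mirror-on=cluster-Z",
}
for cluster, sel in link_selectors.items():
    mirrored = [name for name, labels in services.items() if matches(sel, labels)]
    print(cluster, "mirrors", mirrored)
# cluster-Y mirrors ['foo']
# cluster-Z mirrors ['bar']
```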

Does that help?

kleimkuhler avatar Jul 15 '22 19:07 kleimkuhler

@hdiass I'm going to close this for now but let me know if you have any more questions.

kleimkuhler avatar Aug 26 '22 19:08 kleimkuhler