
The possibility to use a collection of `Server` resources as `targetRef` in `AuthorizationPolicy`

aatarasoff opened this issue 3 years ago · 3 comments

What problem are you trying to solve?

The old ServerAuthorization resource can be attached to a group of Server resources via selectors. This makes it possible to define common policies, which avoids duplication and makes policy management easier.

For example, at the moment we can grant access from the nginx ingress to the list of Servers that really need it:

---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: cool-namespace
  name: linkerd-ingress-allow
spec:
  server:
    selector:
      matchLabels:
        linkerd.io/nginx-enabled: "true"
  client:
    meshTLS:
      identities:
        - "*.ingress.serviceaccount.identity.linkerd.cluster.local"

Or we can organise app-to-app access policies effectively. At ANNA Money, developers can use a dependencies property in an app definition:

...
dependencies:
  - serviceA
  - serviceB
  ...
  - serviceN

We have some conventions built around this definition, and it is natural to use this property to grant access from one app to another. Today we use a trick: policies are generated automatically from this property in the app definition:

{% if dependencies | length > 0 %}
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: {{ app_name }}-deps
  namespace: {{ approved_namespace }}
spec:
  server:
    selector:
      matchExpressions:
        - {key: "anna.money/app", operator: In, values: {{ dependencies }}}
  client:
    meshTLS:
      identities:
        - "{{ app_name }}-sa.{{ app_namespace }}.serviceaccount.identity.linkerd.cluster.local"
{% endif %}

The main benefit of this approach is that it doesn't require any new properties or modifications (such as a list of approved apps). It works today only because selectors let us group a set of Servers.

Because the new policies have no selector option, they lead to a lot of policy duplication and harder management.
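
For illustration, this is roughly what the nginx rule above would look like with the new resources, one AuthorizationPolicy per Server (a sketch only; the resource names are made up):

---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  namespace: cool-namespace
  name: ingress-authn
spec:
  identities:
    - "*.ingress.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: cool-namespace
  name: serviceA-ingress-allow
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: serviceA-server
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: ingress-authn
---
# ...and one more near-identical policy for every other Server
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: cool-namespace
  name: serviceB-ingress-allow
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: serviceB-server
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: ingress-authn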

How should the problem be solved?

I would suggest something like a new ServerList resource, where we can at least define a list of Server resources and refer to it. In my opinion this is still less convenient than selectors, but it would solve the problem because we could generate the list automatically. Furthermore, the relationship could be inverted, as an extension of the current Server resource definition.

Example:

---
kind: ServerGroup
metadata:
  name: some-app-deps
spec:
  servers:
  - serviceA-server
  ...
  - serviceN-server
---
kind: Server
...
spec:
  # common groups / common policies
  serverGroups: 
  - nginx
  - metrics
---
kind: AuthorizationPolicy
metadata:
  name: some-app-deps-policy
spec:
  targetRef:
    group: policy.linkerd.io
    kind: ServerGroup
    name: some-app-deps

Any alternatives you've considered?

Go back to selectors :)

How would users interact with this feature?

No response

Would you like to work on this feature?

yes

aatarasoff · Sep 02 '22 07:09

Another potential alternative could be to simply allow a list of targetRefs in the AuthorizationPolicy resource, similar to how HTTPRoute accepts a list of parentRefs:

---
kind: AuthorizationPolicy
metadata:
  name: some-app-deps-policy
spec:
  targetRefs:
    - group: policy.linkerd.io
      kind: Server
      name:  serviceA-server
    - group: policy.linkerd.io
      kind: Server
      name:  serviceB-server
      # ...

This could be a bit simpler than adding a new resource type for creating a list of servers?
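
For reference, this is roughly the HTTPRoute pattern referred to above, where a single route attaches to several parent Servers at once (a sketch; the apiVersion, route rule, and names are illustrative):

---
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  name: some-app-route
spec:
  # one route attached to several parents at once
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: serviceA-server
    - group: policy.linkerd.io
      kind: Server
      name: serviceB-server
  rules:
    - matches:
        - path:
            value: "/"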

hawkw · Sep 02 '22 16:09

@hawkw, that approach could solve one part of the issue (the dependencies case). However, it either leads to breaking changes or introduces one more field with a similar name, which could confuse users.
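
For example, if a plural field were added alongside the existing one, users could end up with resources like this (a purely hypothetical sketch):

---
kind: AuthorizationPolicy
metadata:
  name: confusing-policy
spec:
  targetRef:          # existing singular field
    group: policy.linkerd.io
    kind: Server
    name: serviceA-server
  targetRefs:         # hypothetical plural field with an almost identical name
    - group: policy.linkerd.io
      kind: Server
      name: serviceB-server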

Moreover, the approach doesn't help with the other part of the issue. To use common policies, we still need something that implicitly allows us to add a new Server to them. Yes, we can create a common policy as follows:

---
kind: AuthorizationPolicy
metadata:
  name: external-ingress-allow
spec:
  targetRefs:
    - group: policy.linkerd.io
      kind: Server
      name:  serviceA-server
    - group: policy.linkerd.io
      kind: Server
      name:  serviceB-server
      # ...

However, we then face the burden of maintaining this policy every time developers add a new app to the mesh or change an app's configuration, for example by exposing it to the internet:

---
kind: AuthorizationPolicy
metadata:
  name: external-ingress-allow
spec:
  targetRefs:
    - group: policy.linkerd.io
      kind: Server
      name:  serviceA-server
    - group: policy.linkerd.io
      kind: Server
      name:  serviceB-server
    # new service was introduced
    - group: policy.linkerd.io
      kind: Server
      name:  newSuperService-server

In my opinion, this is not convenient at scale.

To fix this part of the issue without selectors, we need some way to attach a Server to a target collection:

---
kind: Server
...
spec:
  # common groups / common policies
  serverGroups: 
  - external-ingress
  - internal-ingress
  - metrics

How can we do that without defining such a group?

aatarasoff · Sep 02 '22 20:09

The current scheme is derived from https://gateway-api.sigs.k8s.io/references/policy-attachment/#target-reference-api, which states:

Each Policy resource MUST include a single targetRef field. It MUST not target more than one resource at a time, but it can be used to target larger resources such as Gateways or Namespaces that may apply to multiple child resources.

So, with that in mind, I'm not inclined to change AuthorizationPolicy. It may be appropriate to have something like a ServerGroup resource, though.
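
For example, targetRef can already point at a larger resource such as a Namespace, which applies the policy to every Server in it (a rough sketch; the resource names are illustrative):

---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  namespace: cool-namespace
  name: ingress-allow-all
spec:
  # targeting the Namespace covers all Servers in it
  targetRef:
    kind: Namespace
    name: cool-namespace
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: ingress-authn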

olix0r · Sep 07 '22 15:09

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale[bot] · Dec 08 '22 17:12

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale[bot] · Apr 20 '23 11:04