
Multi-tenancy support

Open · aliok opened this issue 2 years ago · 6 comments

Problem: We have seen requests like these from users:

  1. We would like to make sure our broker/channel/sink is not accessible (no POST possible) from outside the resources in our namespace.
  2. We would like to keep our data on our specific Kubernetes nodes. This is to comply with the GDPR and data sovereignty regulations. A large cluster of ours is used by our departments in different countries, and the law requires us to keep the data on nodes in the country where the data belongs.
  3. We would like to isolate our dataplane.

These are the requests we have heard from our users over the past couple of months.

There are additional requests like these, but I would like to skip them for now:

  • We want to install different versions of CRDs and Knative Eventing
  • ...

Persona: Which persona is this feature for?

Exit Criteria: A measurable (binary) test that would indicate that the problem has been resolved.

Time Estimate (optional): How many developer-days do you think this may take to resolve?

Additional context (optional): Add any other context about the feature request here.

aliok · Feb 22 '22

The definition of "multi-tenancy" is actually a bit confusing. Some users use the term to mean complete isolation/separation of resources, while others, and the Knative community, use it for sharing resources while serving multiple tenants. I think we first need to address this.

  1. We would like to make sure our broker/channel/sink is not accessible (cannot POST) from outside the resources in our namespace.

A similar discussion for (1) is happening in Serving: https://github.com/knative/serving/issues/12533.

  2. We would like to keep our data on our specific Kubernetes nodes. This is to comply with the GDPR and data sovereignty regulations. A large cluster of ours is used by our departments in different countries, and the law requires us to keep the data on nodes in the country where the data belongs.

Users cannot do that if the dataplane is handling resources from multiple tenants (a sketch of what per-tenant node pinning would involve follows at the end of this comment).

  3. We would like to isolate our dataplane.

Users are not sure whether they need complete separation of dataplane pods or whether some lower-level separation (such as separate threads) would be acceptable.
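
As a sketch of what pinning data to country-specific nodes would involve: if a per-tenant dataplane deployment existed, standard Kubernetes node affinity could constrain it to the right nodes. Everything below (names, namespace, labels, image, and the assumption that nodes carry a country label) is hypothetical.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-de-dataplane          # hypothetical per-tenant dataplane
  namespace: tenant-de
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tenant-de-dataplane
  template:
    metadata:
      labels:
        app: tenant-de-dataplane
    spec:
      affinity:
        nodeAffinity:
          # Only schedule onto nodes labeled country=de.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: country         # assumes nodes are labeled with a country label
                operator: In
                values: ["de"]
      containers:
      - name: dataplane
        image: example.com/dataplane:latest   # placeholder image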

aliok · Feb 22 '22

One option might be to provide some abstraction on top of Istio, where we generate something like:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: blah
  namespace: knative-eventing
spec:
  action: ALLOW
  rules:
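  # Allow POSTs only from the listed namespaces.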
  - from:
    - source:
        namespaces: ["default", "knative-eventing"]
    to:
    - operation:
        methods: ["POST"]

For the different components, we could in that way ensure that only a given set of namespaces can POST to them. The above is just an example.
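
To scope such a policy to a single component rather than the whole namespace, an Istio workload selector could be added. A minimal sketch, where the app: broker-ingress label and the tenant namespace are assumptions rather than actual Knative labels:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: broker-ingress-allow
  namespace: knative-eventing
spec:
  selector:
    matchLabels:
      app: broker-ingress            # assumed pod label, not an actual Knative label
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["tenant-a"]     # only this tenant's namespace may POST
    to:
    - operation:
        methods: ["POST"]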

matzew · Feb 22 '22

Here is an issue to document the Istio case: https://github.com/knative/docs/issues/4823

matzew · Mar 15 '22

Tenants could be users. Another issue for documentation: https://github.com/knative/docs/issues/4824

matzew · Mar 15 '22

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] · Aug 24 '22

/triage accepted

pierDipi · Sep 23 '22

Another option we're exploring in Serving for making NetworkPolicy work is to use different destination ports on the same Pod. You can then use L4 (TCP) NetworkPolicy to control access to specific ports on the Broker.

Note that this still doesn't provide any type of authn/authz about which identities can send which types of events, but it at least ensures that entities not in the same Namespace aren't authorized to send to that endpoint.

Kubernetes Services can point to a targetPort which isn't listed in the containers[*].ports for the Pod, which allows you to expand to several thousand available ports on a multi-tenant destination if desired while maintaining network isolation (assuming you restrict end-users' ability to create Endpoints directly).
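
A minimal sketch of that pattern, where all names, labels, and port numbers are illustrative: a per-tenant Service targets a dedicated port on the shared ingress Pod, and an L4 NetworkPolicy admits only that tenant's namespace on that port.

apiVersion: v1
kind: Service
metadata:
  name: broker-ingress-tenant-a      # hypothetical per-tenant Service
  namespace: knative-eventing
spec:
  selector:
    app: broker-ingress              # assumed label on the shared ingress Pod
  ports:
  - port: 80
    targetPort: 9101                 # dedicated to tenant-a; need not appear in
                                     # the Pod's containers[*].ports
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-tenant-a
  namespace: knative-eventing
spec:
  podSelector:
    matchLabels:
      app: broker-ingress
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: tenant-a   # well-known namespace name label
    ports:
    - protocol: TCP
      port: 9101                     # only tenant-a may reach this port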

evankanderson · Nov 03 '22

We're going to have some kind of traffic limiting instead, as part of the internal TLS work we're doing.

@creydr can you link any related issues here?

aliok · Jul 06 '23