OrchardCore Clusters

Fixes #13636

Distributes requests across tenant clusters by using Microsoft's Yarp.ReverseProxy.

Still a work in progress, but here is some initial information.

  • We first rely on the Yarp configuration, which allows defining Routes and Clusters with many options. Each Route is tied to a Cluster composed of Destination(s), across which load balancing can be applied.

  • We only need a single catch-all RouteTemplate and multiple Clusters, on each of which we can configure a custom SlotRange [min, max] property (up to 16384 slots); a configuration sketch follows this list.

  • Each Tenant has a unique slot hash, i.e. a unique Slot, and therefore belongs to the Cluster whose SlotRange contains that slot, this Cluster having multiple Destination(s). Note: we could have modeled a Cluster as having Nodes, but we follow the Yarp config, which has a Clusters list of Cluster type.

  • The same application can run as a proxy or behind it (we check the headers). The advantage, with our distributed services, is that when running as a proxy we are still aware of all tenants' data. So, on a request, we can use the same RunningShellTable to resolve the Tenant, then select the right Cluster based on the Tenant slot hash (in a custom middleware), and let Yarp select one of its Destination(s); a middleware sketch is included at the end of this description.

  • To compute a Tenant slot hash, we use the CRC-16/XMODEM algorithm (the same one Redis uses for clustering keys) applied to the new TenantId property; this automatically spreads new tenants across the slots, and therefore across the configured Clusters. CRC-16 is fast to compute and always returns the same number for a given TenantId, so a tenant stays on the same Cluster; a small hashing sketch follows this list.

  • The distribution is not perfect with only a few tenants, but it improves as their number increases.
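
The exact configuration shape is still being worked out; as a rough sketch only (the cluster names, addresses, and the idea of carrying the custom SlotRange through cluster Metadata are illustrative assumptions, not the final design), the catch-all route and the slot-ranged clusters could be declared with Yarp's in-memory configuration model:

```csharp
using Yarp.ReverseProxy.Configuration;

var builder = WebApplication.CreateBuilder(args);

// One catch-all route; the custom middleware picks the effective cluster per request.
var routes = new[]
{
    new RouteConfig
    {
        RouteId = "all",
        ClusterId = "cluster-1",
        Match = new RouteMatch { Path = "{**catch-all}" },
    },
};

// Two clusters splitting the 16384 slots; each cluster has its own destinations.
var clusters = new[]
{
    new ClusterConfig
    {
        ClusterId = "cluster-1",
        Destinations = new Dictionary<string, DestinationConfig>
        {
            ["node-1"] = new() { Address = "https://node1:5001/" },
            ["node-2"] = new() { Address = "https://node2:5001/" },
        },
        // Illustrative only: the custom SlotRange carried as cluster metadata.
        Metadata = new Dictionary<string, string> { ["SlotRange"] = "0-8191" },
    },
    new ClusterConfig
    {
        ClusterId = "cluster-2",
        Destinations = new Dictionary<string, DestinationConfig>
        {
            ["node-3"] = new() { Address = "https://node3:5001/" },
        },
        Metadata = new Dictionary<string, string> { ["SlotRange"] = "8192-16383" },
    },
};

builder.Services.AddReverseProxy().LoadFromMemory(routes, clusters);

var app = builder.Build();
app.MapReverseProxy();
app.Run();
```

The same shape can also be expressed in the ReverseProxy section of appsettings.json and loaded with LoadFromConfig.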

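For reference, the slot computation is small. A minimal sketch of mapping a TenantId to one of the 16384 slots (the TenantSlot helper name and the UTF-8 encoding of TenantId are assumptions for illustration):

```csharp
using System.Text;

public static class TenantSlot
{
    public const int SlotCount = 16384;

    // Maps a TenantId to a slot in [0, 16383], like Redis does for its hash slots.
    public static int Compute(string tenantId)
        => Crc16XModem(Encoding.UTF8.GetBytes(tenantId)) % SlotCount;

    // CRC-16/XMODEM: polynomial 0x1021, initial value 0x0000, no reflection.
    private static ushort Crc16XModem(ReadOnlySpan<byte> data)
    {
        ushort crc = 0;

        foreach (var b in data)
        {
            crc ^= (ushort)(b << 8);

            for (var bit = 0; bit < 8; bit++)
            {
                crc = (crc & 0x8000) != 0
                    ? (ushort)((crc << 1) ^ 0x1021)
                    : (ushort)(crc << 1);
            }
        }

        return crc;
    }
}
```

Because the result depends only on the TenantId, a tenant always resolves to the same slot and therefore to the same Cluster, while new tenants naturally spread across the configured SlotRanges.
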
TODO: Also couple this to a simple feature that releases a Tenant if it has not been requested for a given amount of time.
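
To tie the pieces together, here is a rough sketch of the per-request cluster selection described above. It assumes IRunningShellTable.Match(host, path) resolves the tenant settings; the TenantId access, the SlotRange lookup, and how the chosen cluster is actually handed to Yarp are placeholders, not the real implementation. It reuses the TenantSlot helper sketched earlier.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using OrchardCore.Environment.Shell;

// Sketch of a custom middleware: resolve the tenant, compute its slot,
// and record the cluster whose SlotRange contains that slot so the
// proxy pipeline can forward to one of that cluster's destinations.
public class ClusterSelectorMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IRunningShellTable _runningShellTable;

    public ClusterSelectorMiddleware(RequestDelegate next, IRunningShellTable runningShellTable)
    {
        _next = next;
        _runningShellTable = runningShellTable;
    }

    public Task InvokeAsync(HttpContext context)
    {
        // The same table the host uses to route requests to tenants.
        var shellSettings = _runningShellTable.Match(context.Request.Host, context.Request.Path);
        if (shellSettings is not null)
        {
            // 'TenantId' is the new property this PR introduces; the indexer access is illustrative.
            var slot = TenantSlot.Compute(shellSettings["TenantId"]);

            // Placeholder: record the cluster whose SlotRange [min, max] contains the slot.
            context.Items["ClusterId"] = FindClusterIdForSlot(slot);
        }

        return _next(context);
    }

    // Placeholder for a lookup against the configured SlotRange of each cluster.
    private static string FindClusterIdForSlot(int slot)
        => slot <= 8191 ? "cluster-1" : "cluster-2";
}
```

Such a middleware would be registered ahead of the proxy endpoints, e.g. app.UseMiddleware&lt;ClusterSelectorMiddleware&gt;() before app.MapReverseProxy(); the way the selected cluster is actually passed to Yarp in this PR is not shown here.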
