OrchardCore Clusters
Fixes #13636
Distributes requests across tenant clusters using Microsoft's Yarp.ReverseProxy.
Work in progress, but here is some initial info.
- We first use the Yarp configuration, which allows defining `Routes` and `Clusters` with many options. Each `Route` is tied to a `Cluster` composed of `Destination`(s), on which load balancing can be applied.
- We only need one catch-all `Route` template and multiple `Clusters`, on each of which we can configure a custom `SlotRange` `[min, max]` property (up to 16384 slots).
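As a sketch, the Yarp configuration could look like the following. The `Routes`/`Clusters`/`Destinations` shape follows Yarp's standard config; placing the custom `SlotRange` under each cluster's `Metadata`, and the cluster/destination names, are assumptions for illustration:

```json
{
  "ReverseProxy": {
    "Routes": {
      "CatchAllRoute": {
        "ClusterId": "Cluster1",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "Cluster1": {
        "Metadata": { "SlotRange": "0, 8191" },
        "Destinations": {
          "Destination1": { "Address": "https://node1.example.com/" },
          "Destination2": { "Address": "https://node2.example.com/" }
        }
      },
      "Cluster2": {
        "Metadata": { "SlotRange": "8192, 16383" },
        "Destinations": {
          "Destination1": { "Address": "https://node3.example.com/" }
        }
      }
    }
  }
}
```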
- Each tenant has a unique slot hash, hence a unique `Slot`, and belongs to the `Cluster` whose `SlotRange` contains that slot, each `Cluster` having multiple `Destination`(s). Note: we could have used a `Cluster` having `Nodes`, but we follow the Yarp config, which has a `Clusters` list of `Cluster` type.
- The same application can run as a proxy or behind it (we check the headers). The advantage with our distributed services is that, even when running as a proxy, we are still aware of all tenants' data. So on a request we can use the same `RunningShellTable` to identify the tenant, then select the right `Cluster` based on the tenant's slot hash (in a custom middleware), and let Yarp select one of its `Destination`(s).
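The selection step of that middleware can be sketched as follows (Python for brevity; the names and the in-memory cluster table are hypothetical, the real implementation being C# middleware working against the Yarp config):

```python
# Hypothetical sketch of slot-based cluster selection: each cluster
# owns a contiguous slot range [min, max] over the 16384 slots.
CLUSTERS = [
    ("Cluster1", 0, 8191),
    ("Cluster2", 8192, 16383),
]

def select_cluster(slot: int) -> str:
    """Return the name of the cluster whose SlotRange contains the slot."""
    for name, lo, hi in CLUSTERS:
        if lo <= slot <= hi:
            return name
    raise ValueError(f"no cluster owns slot {slot}")
```

Yarp then load-balances among that cluster's `Destination`(s) as usual.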
- To compute a tenant slot hash we use the CRC-16/XMODEM algorithm (as Redis does for clustering keys), applied to the new `TenantId` property. This automatically spreads new tenants across the slots and thus across the configured `Clusters`. CRC-16 is fast to compute and always returns the same number for a given `TenantId`, so a tenant stays on the same `Cluster`.
The distribution is not perfect with few tenants but improves as their number increases.
TODO: also couple this to a simple feature allowing a tenant to be released if it has not been requested for a given time.