ocis-charts

.Values.replicas should be independently set for each service

Open Deaddy opened this issue 2 years ago • 3 comments

The number of replicas should be configurable independently for each service, i.e. drop .Values.replicas and instead have .Values.services.$service.replicas for each $service we deploy.

Random thoughts:

  • I think it is fine to make this a breaking change and not add another layer of defaults/ifs in the templates just to keep a previously set .Values.replicas working
  • this would also be more consistent with most other major helm charts
  • the progress of #15 would then also be reflected in the values file: a non-scalable service would not have a replicas field, making it a bit more self-documenting
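A values layout along these lines might look like the following (a sketch only; the service names, replica counts, and which services are scalable are illustrative):

```yaml
# Hypothetical values.yaml excerpt with per-service replicas
services:
  proxy:
    replicas: 3
  frontend:
    replicas: 3
  nats: {}          # not scalable yet, so no replicas field (see #15)
```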

Deaddy avatar Jun 30 '23 09:06 Deaddy

we also could treat replicas like we do resources: a global default setting plus a per-service setting that wins over the global one. Or do you vote for a per-service option only?
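The resources-style approach could look roughly like this (a sketch; the keys and values are illustrative):

```yaml
# Hypothetical values.yaml excerpt with a global default plus override
replicas: 2           # global default applied to every scalable service
services:
  proxy:
    replicas: 5       # per-service setting wins over the global default
```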

wkloucek avatar Jun 30 '23 14:06 wkloucek

Well, that is what I meant with the first point: I do not really think a global setting is useful enough to warrant the added complexity in the Helm chart.

I also expect some components like nats to require a quorum, so a single large replica setting might invite configuration accidents. I also guess most components do not need to scale beyond HA, whereas proxy and frontend probably need quite a few more replicas in most deployments.

I would even argue that a global setting is of little use for resources as well, but finding good defaults there might be trickier than replicas: 1 for each service.

Deaddy avatar Jul 03 '23 10:07 Deaddy

I also expect some components like nats to require quorum

The built-in NATS does not support scaling / clustering. Therefore we have an example with an external NATS cluster. Currently you also need to ensure that replicas are set for the NATS streams; that is why we recently added NACK to the example to achieve this. See: https://github.com/owncloud/ocis-charts/tree/master/deployments/ocis-nats
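For illustration, a NACK-managed JetStream stream with replication might look roughly like this (the stream name and subjects are made up; see the linked deployment example for the actual manifests):

```yaml
# Hypothetical NACK Stream resource -- names and subjects are illustrative
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: example-stream
spec:
  name: example-stream
  subjects: ["example.>"]
  replicas: 3     # requires an external NATS cluster with at least 3 servers
```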

There are no other components with something like a quorum. But there are components that cannot be scaled beyond one replica and should be replaced by external scalable / HA components if needed (IDM = LDAP, IDP = OIDC provider; see: https://github.com/owncloud/ocis-charts/tree/master/deployments/external-user-management). Other components are not yet scalable and are tracked in #15.

so just having one large replica setting might also invite configuration accidents, and I guess most components do not need to scale beyond HA, whereas proxy and frontend probably need quite a few more replicas in most settings.

I totally get your point and agree that we should offer a replica setting per service.

I guess we should have the following logic when it comes to replicas / HPA settings:

  • apply the global replicas setting to a service if neither a global HPA, a service-specific HPA, nor service-specific replicas is set
  • apply the service-specific replicas setting to a service if no service-specific HPA is set
  • apply the global HPA setting to a service if neither service-specific replicas nor a service-specific HPA is set
  • apply the service-specific HPA setting to a service if a service-specific HPA is set

In short: the service-specific setting wins. And HPA wins over the replicas setting at the same level of specificity (global / service-specific).
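In template terms, that precedence could be sketched roughly like this (a hypothetical helper; the .hpa / .replicas keys and the service name are illustrative, not the chart's actual structure):

```yaml
# Hypothetical Deployment template excerpt for one service
{{- $svc := .Values.services.proxy }}
{{- if $svc.hpa }}
  {{- /* service-specific HPA is set: render the HPA, omit replicas */ -}}
{{- else if $svc.replicas }}
replicas: {{ $svc.replicas }}
{{- else if .Values.hpa }}
  {{- /* global HPA is set: render the HPA, omit replicas */ -}}
{{- else }}
replicas: {{ .Values.replicas | default 1 }}
{{- end }}
```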

I think I would even argue that a global setting is kinda useless for resources as well, but having good defaults there might be more tricky than replicas: 1 for each service.

We have this because many services have similarly low resource needs and can be configured together this way. But it is still true that you need to check in the end whether the resources are set correctly.
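That pattern looks roughly like this (a sketch; the exact keys and request values are illustrative):

```yaml
# Hypothetical values.yaml excerpt for resources
resources:                  # global default, fits most lightweight services
  requests:
    cpu: 100m
    memory: 128Mi
services:
  proxy:
    resources:              # per-service setting wins over the global default
      requests:
        cpu: 500m
        memory: 256Mi
```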

wkloucek avatar Sep 07 '23 13:09 wkloucek