[Helm] Add schedulerName support in StatefulSets to allow for certain storage providers to work.
What this PR does
Adds support for setting a schedulerName for services that use PVCs, for cases where a storage provider that requires this option is in use, such as https://github.com/libopenstorage/stork
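As an illustration only (the value keys below are a sketch; the exact layout depends on the final implementation in this PR), a user running Stork might end up setting something like:

```yaml
# values.yaml - hypothetical sketch, not the chart's documented interface.
# Assumes a per-component schedulerName setting for the PVC-backed components
# (ingester, store-gateway, compactor, alertmanager), each handed to Stork.
ingester:
  schedulerName: "stork"
store_gateway:
  schedulerName: "stork"
compactor:
  schedulerName: "stork"
alertmanager:
  schedulerName: "stork"
```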
Which issue(s) this PR fixes or relates to
Fixes #3139
Checklist
- [ ] Tests updated
- [ ] Documentation added
- [x] `CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`
The CHANGELOG has just been cut to prepare for the next Mimir release. Please rebase onto main and, if needed, move the CHANGELOG entry added/updated in this PR to the top of the CHANGELOG document. Thanks!
@pracucci Done :)
Is it possible that the user needs different schedulers for different components?
So far, this is the first time I've heard of a user needing to override the scheduler at all, and it seems to be required only for components that have PVCs, hence the change.
I did consider whether we'd want to override the scheduler of all components, but for this PR I decided not to go that far, given it was not asked for.
Overall, the choices would be:
1. Allow a global override of the scheduler for all components - but this lacks granularity if there is a use case such as `stork` where only the components needing PVCs need to be overridden.
2. Allow overriding the scheduler of components needing PVCs, but not globally (what I have done here).
3. Allow overriding the scheduler for every component individually (this adds the most changes to the chart, though you could argue leaving them commented out by default reduces the possible impact by a lot).
What do you think, @krajorama ?
Finally got around to reading https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/ . So from your description and this, I now understand it's independent of PVCs.
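For reference, `schedulerName` is a plain field on the pod spec, so it can be set on any workload regardless of whether it mounts PVCs. A minimal sketch (the `stork` scheduler name is only an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  # Hand this pod to the "stork" scheduler instead of the default kube-scheduler.
  schedulerName: stork
  containers:
    - name: app
      image: nginx
```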
Option 1 as you say lacks granularity.
Option 2 introduces a third kind of "global", which I feel can be confusing. Currently we have truly global settings and settings that are applied to the components we control directly, but not to MinIO, the rollout operator, or the agent operator, which come from subcharts.
Option 3 is what we've been doing in the past: just add a new setting to each component individually and make it global if there's a strong use case (like `env` and `envFrom` to supply secrets to all components). I think this is the simplest to understand and for us to test, so I would do this.
So I would just add this to every component in the values file (e.g. alertmanager) above the service settings:
```yaml
alertmanager:
  ...
  # -- Optionally set the scheduler for pods of the alertmanager
  schedulerName: ""
  service:
    ...
```
And then in the spec:
```yaml
{{- with .Values.alertmanager.schedulerName }}
schedulerName: {{ . }}
{{- end }}
```
(This works like an `if`, but `.` inside the block is replaced with the value of `schedulerName`! https://helm.sh/docs/chart_template_guide/control_structures/#modifying-scope-using-with)
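To illustrate the `with` scoping, here is a sketch of what the snippet above renders to for two example values (the `stork` name is only illustrative):

```yaml
# values: alertmanager.schedulerName: "stork"
# rendered pod spec fragment:
schedulerName: stork

# values: alertmanager.schedulerName: "" (the default)
# rendered: nothing - `with` skips its body for an empty value,
# so the pod falls back to the default kube-scheduler.
```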
Do you have the time to do this? I can create the PR tomorrow (yeah, I'm working on Saturday).
I'm taking this over to implement my proposed solution; I'll be rebasing the branch, @eamonryan.