fix: update armada to respect node pod limits
What type of PR is this?
Bug fix
What this PR does / why we need it:
This pull request updates Armada to respect node pod limits. Without this change, Armada can try to schedule pods onto nodes that have no remaining capacity for concurrently running pods. When this happens the pods get stuck in a Pending state and eventually have their leases returned. Presumably Armada still counts these pods towards fair share even though they cannot run.
- Configures the default job limits in the Armada server to include `pods` (a rough config sketch follows this list)
- Configures the Armada scheduler to consider the `pods` resource in scheduling decisions by default
- Updates the executor to sanitize pod resources before submitting pods to k8s
- Configures the executor to sanitize the `pods` resource by default
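For illustration, a rough sketch of what the server-side default might look like. The key nesting and values are approximate rather than copied from the actual config schema; `defaultJobLimits` is the option name discussed in the review below:

```yaml
# Sketch only: approximate Armada server scheduling config.
# Key paths and value formats may differ from the real schema.
scheduling:
  defaultJobLimits:
    cpu: "1"
    memory: "1Gi"
    pods: "1"   # count each job's pod against the node's pod capacity
```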
Which issue(s) this PR fixes:
Fixes #4515
Special notes for your reviewer:
In order to roll out safely, the executors must be updated first. After that, the scheduler/server can be rolled out in either order.
Hey, so this is generally a good direction, but there are a few things I think we'd like to consider / possibly change:
- Generally we're trying to move to a world where all mutation happens in the executor API and the executor itself does fewer edits. This PR does not move us in that direction.
- We already basically have this functionality in the executor API
- https://github.com/armadaproject/armada/blob/master/internal/scheduler/api.go#L158
- I think you could achieve what you want by simply adding a config option to `supportedResourceTypes` to say whether we send it to the executor
- I think we should debate whether we add `pods` to more of the config (a rough sketch follows this list):
  - `indexedResources` - I think add it here
  - `dominantResourceFairnessResourcesToConsider` - possibly add it here. It'd mean we fair-share on pods, which is probably correct in some cases (happy to leave this for now)
- Could we confirm that if you override the config it does actually remove `pods`? (I can't remember whether it merges or overwrites, and I don't have time to test right now.)
- I'm somewhat debating whether we should add it as `defaultJobLimits`, since that means the API and the scheduler now need to be configured in unison. It could just be a scheduler concern (tbh all of this PR could be just a scheduler concern), but I think this is probably the most configurable option for now.
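For concreteness, here is a rough sketch of the scheduler-side settings discussed above. The key names (`supportedResourceTypes`, `indexedResources`, `dominantResourceFairnessResourcesToConsider`, `defaultJobLimits`) come from this review; the nesting, the `resolution` fields, and the executor-forwarding flag are assumptions and would need checking against the real config schema:

```yaml
# Sketch only: approximate scheduler config, not the exact schema.
scheduling:
  supportedResourceTypes:
    - name: pods
      resolution: "1"
      # sendToExecutor is a hypothetical flag for the option proposed above:
      # treat pods as a schedulable resource without forwarding it to k8s.
      sendToExecutor: false
  # Adding pods here lets the scheduler index nodes by remaining pod capacity.
  indexedResources:
    - name: pods
      resolution: "1"
  # Optionally include pods in dominant resource fairness calculations.
  dominantResourceFairnessResourcesToConsider:
    - cpu
    - memory
    - pods
  # Per-job default applied when a job does not request the resource itself.
  defaultJobLimits:
    pods: "1"
```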