K8sRunLauncher with pod_spec via Helm chart
Use Case
Assume you use the K8sRunLauncher from the official Helm chart. At the moment you cannot set options such as nodeSelector or tolerations for the run pods in the Helm chart. This is already possible with the CeleryK8sRunLauncher, where you can define e.g. nodeSelector for each pod it spawns.
With the K8sRunLauncher, we can define nodeSelector in the Python code as described in the docs, BUT we would like to keep all of this in one single place, the values.yaml, instead of carrying additional cluster configuration in the Python code.
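For reference, the per-job approach from the Dagster docs attaches raw Kubernetes configuration to a job through the `dagster-k8s/config` tag. A minimal sketch of such a tag value follows; the concrete selector and toleration values are illustrative placeholders, not from this issue:

```python
# Sketch of the Kubernetes config dict that the Dagster docs describe
# attaching to a job via tags={"dagster-k8s/config": ...}.
# The node_selector/tolerations values below are illustrative placeholders.
k8s_config = {
    "pod_spec_config": {
        "node_selector": {"disktype": "ssd"},
        "tolerations": [
            {
                "key": "dedicated",
                "operator": "Equal",
                "value": "dagster",
                "effect": "NoSchedule",
            }
        ],
    }
}

# With dagster installed, this would be used roughly as:
#   @job(tags={"dagster-k8s/config": k8s_config})
#   def my_job(): ...
```

This is exactly the kind of cluster-specific detail the request above wants to move out of the Python code and into values.yaml.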
Ideas of Implementation
I would like to use the K8sRunLauncher and also be able to define e.g. nodeSelector, tolerations, etc. individually for the spawned run pods in the values.yaml of the Helm chart, as is already possible with the CeleryK8sRunLauncher.
So please add the following to values.yaml:
```yaml
runLauncher:
  config:
    k8sRunLauncher:
      annotations: {}        # to be added
      nodeSelector: {}       # to be added
      affinity: {}           # to be added
      tolerations: []        # to be added
      podSecurityContext: {} # to be added
      securityContext: {}    # to be added
```
Message from the maintainers:
Excited about this feature? Give it a :thumbsup:. We factor engagement into prioritization.
Thanks @TimoFriedri for raising this feature request. We'll get back to you soon about this.
Alternatively something like:
```yaml
deployments:
  - name: ...
    image: ...
    port: ...
    nodeSelector: ...
    tolerations: ...
    env: ...
    run_config: # new section for the run/job pods
      nodeSelector: ...
      tolerations: ...
      env: ...
```
would be nice, so that it is independent of the chosen RunLauncher (the RunLauncher could override it). Dagster jobs in the Python code could still override this as well, but in my case I could easily set up all pod configuration in the YAML file.
This would be really, really nice to avoid duplicating the configuration in each job.
Are there any plans for this issue?
Yeah, I would also greatly appreciate such a feature 👍. I'm facing the exact same problem.
Is this a dupe of https://github.com/dagster-io/dagster/issues/4298 ?
This is now available as of dagster 1.1.8 with the runK8sConfig field on the Helm chart here: https://docs.dagster.io/deployment/guides/kubernetes/customizing-your-deployment#instance-level-kubernetes-configuration
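For anyone landing here, a minimal sketch of what that `runK8sConfig` section could look like in values.yaml. The field names follow the linked docs, and the selector/toleration values are illustrative placeholders, so check the documentation for the exact schema:

```yaml
runK8sConfig:
  podSpecConfig:
    nodeSelector:
      disktype: ssd
    tolerations:
      - key: dedicated
        operator: Equal
        value: dagster
        effect: NoSchedule
```

This applies the configuration instance-wide to all run pods, which is the single-place-in-values.yaml behavior this issue asked for.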