[feature] Add ability to specify node affinity & toleration using KFP V2
Feature Area
/area sdk
What feature would you like to see?
A core production use case of KFP is running CPU and GPU workloads on dedicated nodegroups that are more powerful than, and separate from, the nodegroup where Kubeflow itself is installed; these nodegroups usually autoscale as well. In KFP v1 we could achieve this by simply specifying which component would run on which node using node affinity + tolerations. This is no longer possible in KFP v2, yet such a core feature should, in my opinion, still be supported.
What is the use case or pain point?
The existing `set_accelerator_type` is far from flexible enough to support this use case. Here is a short list of examples showing that `set_accelerator_type` does not cover production needs:
- It does not work if the GPU is not one of the few (3) supported accelerators: `NVIDIA_TESLA_K80`, `TPU_V3`, or `cloud-tpus.google.com/v3`. Otherwise we must use the generic `nvidia.com/gpu`, which is not precise and hence defeats the purpose of selecting an accelerator.
- If you have 2 nodegroups with the same GPU, but one should be reserved for inference and the other for pipeline execution (e.g. training), there is no way to express that distinction purely with `set_accelerator_type('nvidia.com/gpu')`.
- The method is only meant for GPUs, but it is common to want to run CPU workloads on specific nodegroups too. Reasons include nodegroup isolation (running workloads that won't affect the nodegroup where Kubeflow's core pods run) or using more powerful CPU nodegroups for pipelines while Kubeflow itself stays on cheaper instances.
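To make the request concrete, here is a minimal sketch of the pod-spec fields that the requested `add_node_affinity()` / `add_toleration()` calls would need to populate (the same fields the KFP v1 SDK exposed). The nodegroup label (`nodegroup`) and taint (`dedicated=gpu-training`) names below are hypothetical examples, not anything defined by KFP:

```python
# Sketch of the pod-spec fields that node affinity + tolerations control.
# The label key/values and taint below are hypothetical examples.
pod_spec_patch = {
    "affinity": {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "nodegroup",           # hypothetical node label
                        "operator": "In",
                        "values": ["gpu-training"],   # only the training pool
                    }]
                }]
            }
        }
    },
    "tolerations": [{
        "key": "dedicated",            # hypothetical taint on the GPU pool
        "operator": "Equal",
        "value": "gpu-training",
        "effect": "NoSchedule",
    }],
}
```

With both fields set, a component's pod is attracted to the tainted GPU nodegroup by the affinity and allowed to land there by the toleration.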
Is there a workaround currently?
Users can try to use external tools such as Kyverno to create mutating rules that a webhook can use to add a toleration and/or node affinity/node selector based on some predefined criteria such as a label name and value.
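For illustration, a Kyverno-based workaround might look like the sketch below: a mutating `ClusterPolicy` that adds a toleration to matching pods. The policy name, namespace, match label, and taint values are all hypothetical and would need to be adapted to labels that actually appear on your pipeline pods:

```yaml
# Hypothetical Kyverno mutating policy: adds a toleration to pipeline pods.
# Namespace, match label, and taint values are example placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-gpu-toleration
spec:
  rules:
    - name: add-toleration-to-pipeline-pods
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [kubeflow-user-example-com]
              selector:
                matchLabels:
                  pipelines.kubeflow.org/v2_component: "true"
      mutate:
        patchStrategicMerge:
          spec:
            tolerations:
              - key: dedicated
                operator: Equal
                value: gpu-training
                effect: NoSchedule
```

Note that because the mutation matches on labels, every pod with the same labels gets the same scheduling constraints, which is exactly the limitation described below.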
It's still a pain, since it is far more involved than simply calling `.add_node_affinity()` and `.add_toleration()` on a component. In fact, we can't even add a label using the KFP SDK anymore, so matching has to be done on labels that happen to be present (we have no way to explicitly ensure their presence).
Even with Kyverno, some cases may be hard or impossible to cover. For instance, suppose you have 2 Kubeflow components with identical labels, but you want one to run on a less expensive GPU nodegroup and only the other on a more powerful one. Since the pods have the same labels, the only way to specify which nodegroup each should run on is at component definition time (via the KFP SDK), and that is currently not supported in KFP v2.
Given that Kubeflow's main goal is to lower the barrier to running ML on Kubernetes, I believe this workaround goes against that goal and should not be the only available option. It would be in everyone's best interest if the KFP SDK added back `add_node_affinity()` and `add_toleration()`, so that data scientists/ML specialists can easily specify where each component runs instead of relying on more advanced MLOps solutions that demand ever more Kubernetes knowledge.
Love this idea? Give it a 👍.
Additionally, it would be great to have the ability to set requests/limits for custom resources. `cpu`, `memory`, and `nvidia.com/gpu` are obviously staples and cover most resource requests/limits, but being able to use and experiment with other custom resources (e.g. to make GPU sharing between containers possible) is a big plus too. So, in addition to the above, I would like to see `add_resource_request()` and `add_resource_limit()` back in the new versions of the KFP SDK.
Hello @AlexandreBrown, thanks for proposing this. Node selector is already supported: https://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/. Node affinity and toleration support is awaiting contributors!
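For readers looking for the node-selector support mentioned above: it lives in the `kfp-kubernetes` package as `kubernetes.add_node_selector(task, label_key=..., label_value=...)`. The snippet below is a self-contained, hypothetical stand-in that mimics that behavior on a plain dict, just to show the effect; the GKE accelerator label is an assumed example:

```python
# Hypothetical stand-in for kfp-kubernetes' add_node_selector, showing
# its effect without requiring the package: it attaches a nodeSelector
# entry to a task-like object (real tasks are PipelineTask instances).
def add_node_selector(task: dict, label_key: str, label_value: str) -> dict:
    task.setdefault("nodeSelector", {})[label_key] = label_value
    return task

task = {"name": "train"}
# Assumed label: a GKE-style accelerator label on the target nodegroup.
add_node_selector(task, "cloud.google.com/gke-accelerator", "nvidia-tesla-a100")
```

A node selector covers the simple "pin to a labeled nodegroup" case, but not the affinity/anti-affinity expressions or tolerations this issue asks for.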
Hello @connor-mccarthy, as I have a high need for this feature, I already have an implementation. I would love to contribute it. How should we proceed? Can I open a PR or do we need to do a Design Review first? (CLA is already submitted)
Hi, @mcrisafu! Thanks for your interest in contributing this.
I think this feature is sufficiently large to be deserving of a design. Please feel free to start there. You can add the doc to this issue. From there, we can decide whether it makes sense to discuss at an upcoming KFP community meeting [Meeting, Agenda].
Hi @connor-mccarthy, thank you for your feedback. Here is the requested design doc.
@Linchin, when you have the chance, could you please take a look at this FR for the KFP BE?
@mcrisafu Thank you for writing the design doc, which includes the general idea. Could you expand it to include more implementation details, ambiguities, potential challenges etc.?
I would like to draw some attention to this topic. There is another issue referencing this one (https://github.com/kubeflow/pipelines/issues/9768) where the implementation of tolerations and affinity is mentioned as part of a bigger plan. @cjidboon94 and @Linchin interacted with that one, but I think @connor-mccarthy and @AlexandreBrown not yet.
We really need this feature and I believe a lot of people do. Node selection, toleration and affinity settings are essential parts of effective pod scheduling in Kubernetes.
I offered help in that thread and could go ahead and start implementing these features, however I found this thread where @mcrisafu mentioned: "...I already have an implementation".
I would love to contribute, but of course there's no point doing duplicated work. Therefore, I ask:
- Is there anything blocking the progress of this feature?
- Is the implementation fully done or, if not, how advanced is it?
- Can we help you in any way to accelerate the process?
We are happy to jump into code review, testing or the implementation itself, should it still be missing some (or several) parts.
Thank you, @schrodervictor. I haven't had the time yet to update the document. I also believe that the suggestion from @cjidboon94 in #9768 is much better than my own "implementation."
We have decided not to migrate to KFP v2 just yet, as we have several issues beyond just toleration. We would greatly appreciate having #9768 implemented. Unfortunately, I don't understand the code base (and Go) well enough to do this myself.
From my perspective, it would be better to prioritize pushing the other feature instead of going with my sub-optimal hack. However, if you're still interested in the code, please take a look at this commit.
A related PR: https://github.com/kubeflow/pipelines/pull/9913
This is a critical feature...
Any updates on this? We really need this in order to migrate users to V2. Also, we'd be down to contribute to implementation.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting so this doesn’t get closed. The feature is still needed.
+1
/lifecycle frozen
Happy to tackle affinity support, bandwidth permitting.
@droctothorpe Thank you for volunteering! There is a draft PR to cover that issue, I pinged you in there so you can talk to the PR author to team up and finish the implementation.
@rimolive @schrodervictor @strickvl @cjidboon94 @HumairAK , Hello All, Does Kubeflow Pipelines (KFP) v2 support affinity and tolerations? We recently migrated from KFP v1 to v2, and these features no longer appear to be functioning as expected. If affinity and tolerations are supported in KFP v2, could someone provide guidance on how to configure them properly?
@venksam Hi, you can use the kfp-kubernetes package. It has an add_toleration method. I'm not entirely sure about the current state of affinity support...
Hello @mcrisafu, I have used this but it is not working. Is there any other method to implement affinity or a scheduling strategy in v2?
@venksam toleration is supported today, not affinities:
```python
from kfp import kubernetes

kubernetes.add_toleration(
    task,
    key="key2",
    operator="Equal",
    value="value1",
    effect="NoSchedule",
)
```
I am currently working on adding support for setting these via input parameters as well; currently they have to be hardcoded into the pipeline.
Affinity is part of the API but is missing the underlying implementation. We can repurpose this issue to track adding affinity support; it should be fairly straightforward to add.
> affinity is part of the api, but is missing the underlying implementation, we can repurpose this issue to track adding affinity support, it should be fairly straight forward to add
So what is the current status for the affinity selection? Is it already implemented?
@christian-heusel Nope. There are at least two attempts at implementing it, though:
- https://github.com/kubeflow/pipelines/pull/9913
- https://github.com/kubeflow/pipelines/pull/10672

Anyone is free to pick it up again.
As an update, toleration with parameterization support was added in KFP 2.5; let's try to get node affinity support into 2.6.
Hello @HumairAK, does KFP v2 support affinity now?
hey @venksam parameterization of affinities is still in progress, current plan is still to support this for next release
Note that hardcoding affinities should still be possible today; it's parameterization via input parameters that is not yet available and targeted for the next release.
resolved by: https://github.com/kubeflow/pipelines/pull/12028