[sdk] Using KFP v2.0+, unable to add pod labels to the Kubeflow pipeline workflow
Hello All,
Environment: KFP version 2.2.0, Kubeflow version 1.8, KFP SDK version 2.7.0
Background: We are trying to add a pod label that injects the Istio sidecar into the Kubeflow pipeline pods, so that we can use the service mesh for authentication, which is a project requirement.
Issue: With KFP v1, if we add kubernetes.add_pod_label(task=task, label_key="sidecar.istio.io/inject", label_value="true") to the code and compile the script with the KFP SDK v1, the label is injected into the related workflow and the run succeeds. However, if we add the same call to a KFP v2 script and compile it with the KFP SDK v2, I can see the label in the compiled file, as shown below:
platform_spec:
  platforms:
    kubernetes:
      deploymentSpec:
        executors:
          exec-load:
            podMetadata:
              labels:
                sidecar.istio.io/inject: 'true'
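For reference, the KFP v2 script we compile looks roughly like the following sketch; the load component body and the pipeline/file names are simplified placeholders, and it assumes the kfp-kubernetes extension package is installed:

from kfp import compiler, dsl, kubernetes  # the kubernetes module comes from the kfp-kubernetes package

@dsl.component
def load():
    # placeholder for the real data-loading step
    print("loading data")

@dsl.pipeline(name="istio-label-test")
def my_pipeline():
    task = load()
    # request the Istio sidecar injection label on the executor pod
    kubernetes.add_pod_label(
        task=task,
        label_key="sidecar.istio.io/inject",
        label_value="true",
    )

compiler.Compiler().compile(my_pipeline, package_path="pipeline.yaml")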
However, when I upload the compiled file and execute it via Kubeflow Pipelines, the label is not injected into the related workflow.
Question: 1. Is kubernetes.add_pod_label still available in KFP v2? 2. How can I add a label to the Kubeflow pipeline workflow YAML when using KFP v2? I was not able to find any information about this in the official Kubeflow documentation.
Regards & thanks!
Not sure if this helps you, but as a workaround you can enforce the Istio injection label on the namespace resource:
apiVersion: v1
kind: Namespace
metadata:
  name: dummy
  labels:
    istio-injection: enabled
Then every pod in the dummy namespace will be part of the service mesh.
https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#controlling-the-injection-policy
Hi @daro1337,
Thank you for your suggestion. The istio-injection label actually already exists at the namespace level. However, it seems that when we create a Kubeflow pipeline, the Istio sidecar is not injected automatically, which is why we are trying to add the label manually.
Regards & thanks
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
/reopen
This looks like we're missing the driver side of this functionality.
/remove-lifecycle stale
/assign mprahl
@HumairAK: GitHub didn't allow me to assign the following users: mprahl.
Note that only kubeflow members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign mprahl
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'd be interested in taking a look!
/assign @mprahl
@emilyyujieli is this still an issue for you? I tried a pipeline with the following YAML snippet:
platforms:
  kubernetes:
    deploymentSpec:
      executors:
        exec-get-data:
          podMetadata:
            labels:
              sidecar.istio.io/inject: 'true'
The "impl" pods do get the label:
labels:
  pipeline/runid: b6c3c2b6-30aa-4c30-95d5-e71e19e683c3
  pipelines.kubeflow.org/v2_component: "true"
  sidecar.istio.io/inject: "true" # <--- injected here
  workflows.argoproj.io/completed: "false"
  workflows.argoproj.io/workflow: dsl-input-6lntt
name: dsl-input-6lntt-system-container-impl-2123244313
Where as the "driver" pods do not:
labels:
  pipeline/runid: b6c3c2b6-30aa-4c30-95d5-e71e19e683c3
  pipelines.kubeflow.org/v2_component: "true"
  workflows.argoproj.io/completed: "true"
  workflows.argoproj.io/workflow: dsl-input-6lntt
name: dsl-input-6lntt-system-container-driver-2247651804
Was the expectation that the label be set on all pods and not just the "impl" pods?
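For anyone who wants to check this on their own cluster, here is a minimal sketch using the Kubernetes Python client; the namespace is a placeholder you would replace with the one your run executes in, and the workflow name is taken from my test run above:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List the pods that Argo created for this pipeline run and show whether
# the Istio injection label was applied to each of them.
pods = v1.list_namespaced_pod(
    namespace="kubeflow-user-example-com",  # placeholder: use your run's namespace
    label_selector="workflows.argoproj.io/workflow=dsl-input-6lntt",
)
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels.get("sidecar.istio.io/inject"))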
Tracking this down further, it seems this should work since 2.1, based on https://github.com/kubeflow/pipelines/commit/b3978c1e98a6aa119d5411315dd6ebe8d79ef0f9.
Okay, it sounds like this was already addressed for launcher pods. If the same is needed for driver pods, let's create a new, more explicit issue targeting driver pods.