[sdk] ExitHandler doesn't compile with task Inputs
Environment
- KFP version: 2.0.3 (manifests v1.8 release)
- KFP SDK version:
  - kfp 2.4.0
  - kfp-kubernetes 1.0.0
  - kfp-pipeline-spec 0.2.2
  - kfp-server-api 2.0.3
Issue is possibly related to https://github.com/kubeflow/pipelines/issues/9386.
Steps to reproduce
Execute a pipeline that contains an ExitHandler whose exit_task takes an input produced by another task.
Simple example:
```python
from kfp import dsl
from kfp.compiler import Compiler

@dsl.component
def test_step(input: str) -> str:
    return "test"

@dsl.pipeline
def test_pipeline():
    pre_task = test_step(input="pre")
    # Replacing the input with a static string works.
    # Using dsl.Input/dsl.Output artifacts causes the same issue.
    exit_task = test_step(input=pre_task.output)
    with dsl.ExitHandler(exit_task=exit_task):
        test_step(input="test")

Compiler().compile(test_pipeline, "test_pipeline.yaml")
```
Expected result
No error on compilation. Instead, compilation fails with:
```
C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\dsl\component_decorator.py:119: FutureWarning: Python 3.7 has reached end-of-life. The default base_image used by the @dsl.component
decorator will switch from 'python:3.7' to 'python:3.8' on April 23, 2024. To ensure your existing components work with versions of the KFP SDK released after that date, you should provide an explicit base_image argument and ensure your component works as intended on Python 3.8.
  return component_factory.create_component_from_func(
Traceback (most recent call last):
  File "c:\Users\User\Documents\git\rokuto\test.py", line 10, in <module>
    @dsl.pipeline
    ^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\dsl\pipeline_context.py", line 65, in pipeline
    return component_factory.create_graph_component_from_func(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\dsl\component_factory.py", line 669, in create_graph_component_from_func
    return graph_component.GraphComponent(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\dsl\graph_component.py", line 68, in __init__
    pipeline_spec, platform_spec = builder.create_pipeline_spec(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\compiler\pipeline_spec_builder.py", line 1854, in create_pipeline_spec
    inputs = compiler_utils.get_inputs_for_all_groups(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\compiler\compiler_utils.py", line 266, in get_inputs_for_all_groups
    _get_uncommon_ancestors(
  File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\kfp\compiler\compiler_utils.py", line 662, in _get_uncommon_ancestors
    raise ValueError(task2.name + ' does not exist.')
ValueError: test-step-2 does not exist.
```
---
Impacted by this bug? Give it a 👍.
Is this a duplicate of https://github.com/kubeflow/pipelines/issues/9459, @connor-mccarthy?
/assign @connor-mccarthy
@TobiasGoerke, I don't think this is. This looks like an SDK bug, while https://github.com/kubeflow/pipelines/issues/9459 is a FR for the KFP open source backend.
Team, any update on this? The default hello-world pipeline is not working on Kubeflow v1.8 (Pipelines v2.0.3). Kindly share the fix.
Notebook Error:
I got the same issue with the basic example.
@subasathees, this is a deprecation warning. Consider migrating your components to `dsl.component(base_image='python:3.8')` to prepare for the change in default value.
Thanks for the reply. I saw somewhere that people discussed this being a Python 3.7 issue that would disappear once moving to 3.8 (in 2024, I think). What is strange is that my custom-built image is based on Python 3.11.
@kabartay, is it possible you're using Python 3.7 in your compilation environment, despite using > Python 3.7 for the runtime environment?
I bumped into the same issue today on Python 3.10 and found a workaround: wrap the tasks (`pre_task` and `exit_task` in the example) in a `dsl.pipeline` and pass it as `exit_task` to `ExitHandler`.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
/reopen
I bumped into the same issue today on Python 3.10 and found a workaround: wrap the tasks (`pre_task` and `exit_task` in the example) in a `dsl.pipeline` and pass it as `exit_task` to `ExitHandler`.
The workaround helps, but the original issue does not seem to be resolved.
@AnnKatrinBecker: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
I bumped into the same issue today on Python 3.10 and found a workaround: wrap the tasks (`pre_task` and `exit_task` in the example) in a `dsl.pipeline` and pass it as `exit_task` to `ExitHandler`. The workaround helps, but the original issue does not seem to be resolved.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@HumairAK: Reopened this issue.
In response to this:
/reopen
I bumped into the same issue today on Python 3.10 and found a workaround: wrap the tasks (`pre_task` and `exit_task` in the example) in a `dsl.pipeline` and pass it as `exit_task` to `ExitHandler`.
Can you help us by providing a code example of that workaround? It will help others to do the same while we work on a fix.
Hi, this issue prevents us from creating a Ray cluster with a unique name at KFP runtime, so we cannot concurrently execute the same KFP pipeline.
/assign @hbelmiro
Exit tasks cannot depend on any other tasks. This validation exists in the code, but the error is not shown. I've sent a PR to fix that.
The problem is in this line:

```python
exit_task = test_step(input=pre_task.output)
```

It compiles if we change that line to not depend on `pre_task`:

```python
exit_task = test_step(input="some string")
```