Multiple output versions for step outputs
Describe changes
Highlights:
- `StepRunResponse.outputs` is now `Dict[str, List["ArtifactVersionResponse"]]` to support multiple versions of the same artifact
- `StepNodeDetails.outputs` is now `Dict[str, List[str]]` for the same reason
- `type` is removed from `StepOutputSchema` and now resides in `ArtifactVersionSchema` directly
- The old types `DEFAULT` and `MANUAL` are joined by new artifact save types: `EXTERNAL` and `PREEXISTING`, for `ExternalArtifact` and `register_artifact` respectively
- `ArtifactVersionResponse`/`Request` now expect `save_type`
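For illustration, a minimal sketch of how the new shape could look from the client side; the pipeline, step, and output names below are hypothetical, only the `outputs` shape and the `save_type` field come from this PR:

```python
from zenml.client import Client

# Hypothetical pipeline "training_pipeline" with a step named "trainer".
step_run = Client().get_pipeline("training_pipeline").last_run.steps["trainer"]

# `outputs` is now Dict[str, List[ArtifactVersionResponse]]: a single output
# name can map to several artifact versions produced by the same step.
for output_name, versions in step_run.outputs.items():
    for artifact_version in versions:
        # `save_type` is one of DEFAULT, MANUAL, EXTERNAL or PREEXISTING.
        print(output_name, artifact_version.version, artifact_version.save_type)
```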
It is very shaky for the frontend, so I would keep it for a while until we align with @Cahllagerfeld
P.S. Docs will follow once review feedback has been collected.
Pre-requisites
Please ensure you have done the following:
- [x] I have read the CONTRIBUTING.md document.
- [ ] If my change requires a change to docs, I have updated the documentation accordingly.
- [x] I have added tests to cover my changes.
- [x] I have based my new branch on `develop` and the open PR is targeting `develop`. If your branch wasn't based on develop, read the Contribution guide on rebasing your branch to develop.
- [ ] If my changes require changes to the dashboard, these changes are communicated/requested.
Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [x] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Other (add details above)
LLM Finetuning template updates in examples/llm_finetuning have been pushed.
I have more of a general question:
Your current implementation covers how an artifact was saved, but how do we expose the way it was loaded? E.g. even if an artifact is a regular step output, I can load it either the "normal" way, by defining it as a step input, or manually by calling `load_artifact(...)`.
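For context, a minimal sketch of the two load paths being contrasted here; the step and artifact names are made up, and it assumes ZenML's top-level `load_artifact` helper referenced above:

```python
from zenml import load_artifact, step


@step
def trainer(dataset: int) -> None:
    # "Normal" way: the artifact arrives as a declared step input and the
    # dependency shows up as an edge in the pipeline DAG.
    print(dataset)


@step
def evaluator() -> None:
    # Manual way: the same artifact is fetched inside the step body, so it
    # never appears as a step input in the DAG.
    dataset = load_artifact("my_dataset")
    print(dataset)
```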
True, I didn't touch inputs here at all, since that was not in the scope of the ticket. I would prefer to handle it separately. Do you mean the DAG and how we define which input type it is? Honestly, that would be hellishly complex to design, both backend- and FE-wise.
I can do this crazy thing and I have no clue how it would be shown:
```python
from typing_extensions import Annotated

from zenml import pipeline, step
from zenml.client import Client


@step
def step_1() -> Annotated[int, "my_int"]:
    return 42


@pipeline
def pipe_1():
    step_1()
    # step_2 (defined elsewhere) consumes the artifact via lazy loading.
    step_2(Client().get_artifact_version("my_int"), after=["step_1"])
```
So it is a Schrödinger-Artifact now: it is DEFAULT and LAZY_LOADED at the same time...
Classification template updates in examples/mlops_starter have been pushed.
E2E template updates in examples/e2e have been pushed.
I see, makes sense, let's handle that separately. I think the problem is mostly how to visualize what type of input it is, right? It could just be a differently colored line depending on the type of input, but I'm sure Zuri will come up with something much better 😄 In general I think it's both: it's a regular output for the first step, but for the second step it's a coincidence that it was generated in the same run. The user's general intention was to load an artifact via lazy loading, not to pass it the regular way, otherwise they would have done that ;)
That's also why the input type can't be stored in the artifact table: one artifact might be a different type of input for different steps/runs, or even for the exact same step.
Yep, I also realized that during development. This will stay separate, as it is now, but IMO we will need to enrich it with more types beyond default and manual.
[!IMPORTANT]
Review skipped
Auto reviews are disabled on this repository.
Please check the settings in the CodeRabbit UI or the `.coderabbit.yaml` file in this repository. To trigger a single review, invoke the `@coderabbitai review` command. You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.
There is still an unrelated error on macOS with Python 3.9, but I will merge this in as is.
```
Using Python 3.9.20 environment at /Users/runner/hostedtoolcache/Python/3.9.20/x64
  × No solution found when resolving dependencies:
  ╰─▶ Because torch==2.4.0 has no wheels with a matching Python implementation
      tag and vllm>=0.6.0 depends on torch==2.4.0, we can conclude that
      vllm>=0.6.0 cannot be used.
      And because only the following versions of vllm are available:
          vllm<=0.6.0
          vllm==0.6.1
          vllm==0.6.1.post1
          vllm==0.6.1.post2
          vllm==0.6.2
          vllm==0.6.3
          vllm==0.6.3.post1
      and you require vllm>=0.6.0, we can conclude that your requirements
      are unsatisfiable.
```