
Multiple output versions for step outputs

Open avishniakov opened this issue 1 year ago • 9 comments

Describe changes

Highlights:

  • StepRunResponse.outputs is now Dict[str, List["ArtifactVersionResponse"]] to support multiple versions of the same artifact
  • StepNodeDetails.outputs is now Dict[str, List[str]] for the same reason
  • type is removed from StepOutputSchema and now resides in the ArtifactVersionSchema directly
  • The existing save types DEFAULT and MANUAL are extended with new artifact types: EXTERNAL for ExternalArtifact and PREEXISTING for register_artifact.
  • ArtifactVersionResponse/Request now expect a save_type (see the sketch after this list)
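
For illustration, a minimal sketch of consuming the new outputs shape; the run ID and step name are placeholders, and the save_type values follow the list above:

from zenml.client import Client

run = Client().get_pipeline_run("<run_id>")  # placeholder ID
step_run = run.steps["step_1"]  # hypothetical step name
# outputs is now Dict[str, List[ArtifactVersionResponse]], so every
# output name can map to several artifact versions:
for name, versions in step_run.outputs.items():
    for av in versions:
        print(name, av.id, av.save_type)  # DEFAULT/MANUAL/EXTERNAL/PREEXISTING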

This is very shaky for the frontend, so I would keep it open for a while until we align with @Cahllagerfeld.

P.S. Docs to follow once some review feedback has been collected.

Pre-requisites

Please ensure you have done the following:

  • [x] I have read the CONTRIBUTING.md document.
  • [ ] If my change requires a change to docs, I have updated the documentation accordingly.
  • [x] I have added tests to cover my changes.
  • [x] I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop, read the contribution guide on rebasing your branch to develop.
  • [ ] If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [x] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Other (add details above)

avishniakov avatar Oct 10 '24 15:10 avishniakov

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

github-actions[bot] avatar Oct 10 '24 15:10 github-actions[bot]

I have more of a general question: Your current implementation solves how an artifact was saved, but how do we expose the way it was loaded? E.g. even if an artifact is a regular step output, I can load it either the "normal" way by defining it as a step input, or manually by calling load_artifact(...).
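
(Sketched for concreteness, assuming load_artifact is importable from the zenml top level, these are the two loading paths in question:)

from zenml import step, load_artifact  # import path assumed

@step
def normal_consumer(data: int) -> None:
    # "normal" way: the artifact is declared as a step input
    print(data)

@step
def manual_consumer() -> None:
    # manual way: fetch the artifact inside the step body
    data = load_artifact("my_int")  # hypothetical artifact name
    print(data)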

schustmi avatar Oct 10 '24 15:10 schustmi

I have more of a general question: Your current implementation solves how an artifact was saved, but how do we expose the way it was loaded? E.g. even if an artifact is a regular step output, I can load it either the "normal" way by defining it as a step input, or manually by calling load_artifact(...).

True, I didn't touch inputs here at all, since that was not in the scope of the ticket. I would prefer to handle it separately. Do you mean the DAG and how we define which input type it is? Honestly speaking, that would be hellishly complex design/FE-wise.

I can do this crazy thing and I have no clue how it would be shown:

from typing import Annotated
from zenml import pipeline, step
from zenml.client import Client

@step
def step_1() -> Annotated[int, "my_int"]:
    return 42

@step
def step_2(my_int: int) -> None:  # hypothetical consumer, not in the original
    ...

@pipeline
def pipe_1():
    step_1()
    # lazy-load the artifact that step_1 produces in the same run
    step_2(Client().get_artifact_version("my_int"), after=["step_1"])

So it is a Schrödinger artifact now: it is DEFAULT and LAZY_LOADED at the same time...

avishniakov avatar Oct 10 '24 15:10 avishniakov

Classification template updates in examples/mlops_starter have been pushed.

github-actions[bot] avatar Oct 10 '24 15:10 github-actions[bot]

E2E template updates in examples/e2e have been pushed.

github-actions[bot] avatar Oct 10 '24 16:10 github-actions[bot]

I see, makes sense, let's handle that separately. I think the problem is mostly how to visualize what type of input it is, right? It could just be a differently colored line depending on the type of input, but I'm sure Zuri will come up with something much better 😄 In general I think it's both: it's a regular output for the first step, but for the second step it's a coincidence that it was generated in the same run. The general intention of the user was to load an artifact using lazy loading, not to pass it the regular way, otherwise they would have done that ;)

schustmi avatar Oct 10 '24 16:10 schustmi

That's also why the input type can't be stored in the artifact table: one artifact might be a different type of input for different steps/runs, or even for the same exact step.

schustmi avatar Oct 10 '24 16:10 schustmi

That's also why the input type can't be stored in the artifact table: one artifact might be a different type of input for different steps/runs, or even for the same exact step.

Yep, I also realized that during development. This will stay separate as it is now, but we would need to enrich it with more types beyond default and manual, IMO.
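
To illustrate the point (hypothetical names, not this PR's actual schema), the input type naturally lives on the step-run-to-artifact association rather than on the artifact row:

from enum import Enum

class StepInputType(str, Enum):  # hypothetical enum beyond default/manual
    DEFAULT = "default"      # passed as a regular step input
    MANUAL = "manual"        # loaded via load_artifact(...)
    LAZY_LOADED = "lazy"     # via Client().get_artifact_version(...)

class StepRunInputArtifact:  # one association row per (step run, artifact)
    step_run_id: str
    artifact_version_id: str
    type: StepInputType  # can differ per step run for the same artifact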

avishniakov avatar Oct 10 '24 16:10 avishniakov

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

github-actions[bot] avatar Oct 17 '24 16:10 github-actions[bot]

[!IMPORTANT]

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


coderabbitai[bot] avatar Oct 28 '24 16:10 coderabbitai[bot]

There is still some unrelated error on macOS with Python 3.9, but I will merge this in as is.

Using Python 3.9.20 environment at /Users/runner/hostedtoolcache/Python/3.9.20/x64
  × No solution found when resolving dependencies:
  ╰─▶ Because torch==2.4.0 has no wheels with a matching Python implementation
      tag and vllm>=0.6.0 depends on torch==2.4.0, we can conclude that
      vllm>=0.6.0 cannot be used.
      And because only the following versions of vllm are available:
          vllm<=0.6.0
          vllm==0.6.1
          vllm==0.6.1.post1
          vllm==0.6.1.post2
          vllm==0.6.2
          vllm==0.6.3
          vllm==0.6.3.post1
      and you require vllm>=0.6.0, we can conclude that your requirements
      are unsatisfiable.
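
The failure above is most likely because torch 2.4.0 ships no macOS x86_64 wheels and the runner is Intel macOS. A hedged sketch of how such a requirement could be guarded with a standard environment marker (illustrative only, not what was done here):

from packaging.markers import Marker

# Hypothetical guard for a requirement such as:
#   vllm>=0.6.0; sys_platform != "darwin" or platform_machine == "arm64"
marker = Marker('sys_platform != "darwin" or platform_machine == "arm64"')
print(marker.evaluate())  # False on an Intel macOS runner, so vllm would be skipped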

avishniakov avatar Nov 07 '24 06:11 avishniakov