seldon-core
Bump mlflow from 2.5.0 to 2.10.0 in /servers/mlflowserver/models/elasticnet_wine
Bumps mlflow from 2.5.0 to 2.10.0.
Release notes
Sourced from mlflow's releases.
MLflow 2.10.0
In MLflow 2.10, we're introducing a number of significant new features that prepare the way for enhanced Deep Learning support now and in the future, broaden support for GenAI applications, and bring quality-of-life improvements to the MLflow Deployments Server (formerly the AI Gateway).
New MLflow Website
We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.
Model Signature Supports Objects and Arrays (#9936, @serena-ruan)
Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.
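As a rough illustration of what the new types enable, a nested chat-style payload can now be described by a signature. The payload, field names, and behavior below are a sketch assuming MLflow >= 2.10, not code from the release notes:

```python
# Illustrative sketch: a chat-style input example whose schema uses the
# new Object and Array signature elements introduced in MLflow 2.10.
input_example = {
    "messages": [  # an Array of message Objects
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
}

try:
    # With MLflow >= 2.10, infer_signature captures the nested structure
    # as Object/Array schema elements instead of rejecting it.
    from mlflow.models import infer_signature

    signature = infer_signature(model_input=input_example)
    print(signature)
except Exception:
    # mlflow is missing or older than 2.10; the payload above still
    # illustrates the shape the new types can describe.
    signature = None
```

The resulting signature can then be passed to `log_model` so serving-time inputs are validated against the nested schema.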
Langchain Autologging (#10801, @serena-ruan)
LangChain now has autologging support! When you invoke a chain with autologging enabled, most chain implementations are automatically logged, recording and storing your configured LLM application for you. See the new Langchain documentation to learn more about how to use this feature.
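Enabling this might look like the following sketch (assumes `mlflow` >= 2.10 and `langchain` are installed; the commented chain is hypothetical, not from the release notes):

```python
# Sketch: turning on LangChain autologging. Wrapped in try/except so the
# snippet degrades gracefully where mlflow/langchain are unavailable.
autolog_enabled = False
try:
    import mlflow

    mlflow.langchain.autolog()  # one call enables LangChain autologging
    autolog_enabled = True
except Exception:
    pass  # mlflow or langchain not available in this environment

# Once enabled, invoking a chain logs the chain definition automatically,
# e.g. (hypothetical chain):
#   chain = prompt | llm
#   chain.invoke({"question": "..."})   # recorded by autologging
print("autologging enabled:", autolog_enabled)
```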
Prompt Templating for Transformers Models (#10791, @daniellok-db)
The MLflow `transformers` flavor now supports prompt templates. You can specify an application-specific set of instructions to submit to your GenAI pipeline, simplifying and streamlining the system prompts supplied with each input request. Check out the updated guide to transformers to learn more and see examples!
MLflow Deployments Server Enhancement (#10765, @gabrielfu; #10779, @TomeHirata)
The MLflow Deployments Server now supports two requested features: (1) OpenAI endpoints that support streaming responses, so you can configure an endpoint to return real-time responses for Chat and Completions instead of waiting for the entire text to be completed; and (2) per-endpoint rate limits, which help control cost overruns when using SaaS models.
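A per-endpoint rate limit might be declared roughly like this sketch of a deployments server config file. The exact keys (`limit`, `renewal_period`, `calls`) and the endpoint layout are assumptions to verify against the MLflow 2.10 documentation before use:

```yaml
# Hypothetical deployments server config (config.yaml) -- check key names
# against the MLflow 2.10 docs before relying on them.
endpoints:
  - name: chat
    endpoint_type: llm/v1/chat
    model:
      provider: openai
      name: gpt-3.5-turbo
      config:
        openai_api_key: $OPENAI_API_KEY
    limit:                  # new in 2.10: per-endpoint rate limiting
      renewal_period: minute
      calls: 10
```

Streaming (feature 1) is requested per call by the client rather than in this config; consult the 2.10 docs for the exact request flag.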
Further Document Improvements
Continued the push for enhanced documentation, guides, tutorials, and examples by expanding on core MLflow functionality (Deployments, Signatures, and Model Dependency management), as well as entirely new pages for GenAI flavors. Check them out today!
Other Features:
- [Models] Enhance the MLflow Models `predict` API to serve as a pre-logging validator of environment compatibility (#10759, @B-Step62)
- [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @KonakanchiSwathi)
- [Models] Add support for retrieving and storing license files for transformers models (#10871, @BenWilson2)
- [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @C-K-Loan)
- [Models] Automatically convert OpenAI input messages to LangChain chat messages for `pyfunc` predict (#10758, @dbczumar)
- [Tracking] Enhance async logging functionality by ensuring flush is called on `Futures` objects (#10715, @chenmoneygithub)
- [Tracking] Add support for a non-interactive mode for the `login()` API (#10623, @henxing)
- [Scoring] Allow MLflow model serving to support direct `dict` inputs with the `messages` key (#10742, @daniellok-db, @B-Step62)
- [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @gabrielfu)
- [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @prithvikannan)
- [UI] Introduce a number of new features for the MLflow UI (#10864, @daniellok-db)
- [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @daniellok-db)
- [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @gabrielfu)

Bug fixes
- [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @BenWilson2)
- [Models] Add support for paddle 2.6.0 release (#10757, @WeichenXu123)
- [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @BenWilson2)
- [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @WeichenXu123)
- [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @daniellok-db)
... (truncated)
Changelog
Sourced from mlflow's changelog.
2.10.0 (2024-01-26)
MLflow 2.10.0 includes several major features and improvements
In MLflow 2.10, we're introducing a number of significant new features that prepare the way for enhanced Deep Learning support now and in the future, broaden support for GenAI applications, and bring quality-of-life improvements to the MLflow Deployments Server (formerly the AI Gateway).
Our biggest features this release are:
We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.
Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.
LangChain now has autologging support! When you invoke a chain with autologging enabled, most chain implementations are automatically logged, recording and storing your configured LLM application for you. See the new Langchain documentation to learn more about how to use this feature.
The MLflow `transformers` flavor now supports prompt templates. You can specify an application-specific set of instructions to submit to your GenAI pipeline, simplifying and streamlining the system prompts supplied with each input request. Check out the updated guide to transformers to learn more and see examples!
The MLflow Deployments Server now supports two requested features: (1) OpenAI endpoints that support streaming responses, so you can configure an endpoint to return real-time responses for Chat and Completions instead of waiting for the entire text to be completed; and (2) per-endpoint rate limits, which help control cost overruns when using SaaS models.
Continued the push for enhanced documentation, guides, tutorials, and examples by expanding on core MLflow functionality (Deployments, Signatures, and Model Dependency management), as well as entirely new pages for GenAI flavors. Check them out today!
Features:
- [Models] Introduce `Objects` and `Arrays` support for model signatures (#9936, @serena-ruan)
- [Models] Support saving prompt templates for transformers (#10791, @daniellok-db)
- [Models] Enhance the MLflow Models `predict` API to serve as a pre-logging validator of environment compatibility (#10759, @B-Step62)
- [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @KonakanchiSwathi)
- [Models] Add support for retrieving and storing license files for transformers models (#10871, @BenWilson2)
- [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @C-K-Loan)
- [Models] Automatically convert OpenAI input messages to LangChain chat messages for `pyfunc` predict (#10758, @dbczumar)
- [Tracking] Add support for Langchain autologging (#10801, @serena-ruan)
- [Tracking] Enhance async logging functionality by ensuring flush is called on `Futures` objects (#10715, @chenmoneygithub)
- [Tracking] Add support for a non-interactive mode for the `login()` API (#10623, @henxing)
- [Scoring] Allow MLflow model serving to support direct `dict` inputs with the `messages` key (#10742, @daniellok-db, @B-Step62)
- [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @gabrielfu)
- [Deployments] Add the ability to set rate limits on configured endpoints within the MLflow deployments server API (#10779, @TomeHirata)
- [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @prithvikannan)
- [UI] Introduce a number of new features for the MLflow UI (#10864, @daniellok-db)
- [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @daniellok-db)
- [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @gabrielfu)

Bug fixes:
- [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @BenWilson2)
- [Models] Add support for paddle 2.6.0 release (#10757, @WeichenXu123)
- [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @BenWilson2)
- [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @WeichenXu123)
- [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @daniellok-db)
- [Scoring] Fix an issue where SageMaker tags were not propagating correctly (#9310, @clarkh-ncino)
- [Windows / Databricks] Fix an issue with executing Databricks run commands from within a Windows environment (#10811, @wolpl)
- [Models / Databricks] Disable `mlflowdbfs` mounts for JohnSnowLabs flavor due to flakiness (#9872, @C-K-Loan)
... (truncated)
Commits
- 628fba4 Fix azure openai and docs (#10894)
- ebb8b2d Run `python3 dev/update_mlflow_versions.py pre-release ...` (#10909)
- 10d79a7 Run `python3 dev/update_ml_package_versions.py` (#10907)
- 67090c8 Run `python3 dev/update_pypi_package_index.py` (#10905)
- d1ee9c9 Run `python3 dev/update_requirements.py --requirements-...` (#10906)
- 211fbc7 Fix langchain test (#10901)
- 97e85f9 Revert "Implement promptflow model flavor (#10104)" (#10903)
- c438237 Fix recipe card display format (#10893)
- b7c0b77 Fixed the `KeyError: 'loss'` bug for the Quickstart guideline (#10886)
- 6e5ec77 Add unit tests for Docker image building and refactor (#10876)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the Security Alerts page.
Hi @dependabot[bot]. Thanks for your PR.
I'm waiting for a SeldonIO or todo member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.
Superseded by #5608.