[BUG] mlflow.openai.log_model tasks unsupported
Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the issues policy
Where did you encounter this bug?
Databricks
Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
MLflow version
- mlflow 2.9.2
- mlflow-skinny 2.8.1
System information
- Standard_DS3_v2
- DBR 14.2 ML
Describe the problem
I am able to run the same code locally and have it work. When I try to run mlflow.openai.log_model in a Databricks notebook, I get the error MlflowException: Unsupported task type: <class 'openai.lib._old_api.APIRemovedInV1Proxy'>.
I get messages that both the openai.ChatCompletion and openai.Completion tasks are unsupported.
Tracking information
REPLACE_ME
Code to reproduce issue
import mlflow
import openai
import pandas as pd

system_prompt = "You are a helpful assistant."
mlflow.log_param("system_prompt", system_prompt)

# Evaluate the model on some example questions
questions = pd.DataFrame(
    {
        "questions": [
            "Who won the Super Bowl?"
        ],
    }
)

# NOTE: query() is a user-defined retrieval helper (definition not shown here)
logged_model = mlflow.openai.log_model(
    model="gpt-3.5-turbo-1106",
    task=openai.ChatCompletion,  # this argument triggers the error below
    artifact_path="model",
    context=query(questions),
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"The user question is: {questions} \n The context: {query(questions)}"},
    ],
)

mlflow.evaluate(
    model=logged_model.model_uri,
    model_type="question-answering",
    data=questions,
)

# Load and inspect the evaluation results
results: pd.DataFrame = mlflow.load_table(
    "eval_results_table.json", extra_columns=["run_id", "params.system_prompt"]
)
print("Evaluation results:")
print(results)
Stack trace
File /databricks/python/lib/python3.10/site-packages/mlflow/openai/__init__.py:571, in log_model(model, task, artifact_path, conda_env, code_paths, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, metadata, **kwargs)
479 @experimental
480 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name=FLAVOR_NAME))
481 def log_model(
(...)
494 **kwargs,
495 ):
496 """
497 Log an OpenAI model as an MLflow artifact for the current run.
498
(...)
569
570 """
--> 571 return Model.log(
572 artifact_path=artifact_path,
573 flavor=mlflow.openai,
574 registered_model_name=registered_model_name,
575 model=model,
576 task=task,
577 conda_env=conda_env,
578 code_paths=code_paths,
579 signature=signature,
580 input_example=input_example,
581 await_registration_for=await_registration_for,
582 pip_requirements=pip_requirements,
583 extra_pip_requirements=extra_pip_requirements,
584 metadata=metadata,
585 **kwargs,
586 )
File /databricks/python/lib/python3.10/site-packages/mlflow/models/model.py:619, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, **kwargs)
613 if (
614 (tracking_uri == "databricks" or get_uri_scheme(tracking_uri) == "databricks")
615 and kwargs.get("signature") is None
616 and kwargs.get("input_example") is None
617 ):
618 _logger.warning(_LOG_MODEL_MISSING_SIGNATURE_WARNING)
--> 619 flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs)
620 mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path)
621 try:
File /databricks/python/lib/python3.10/site-packages/mlflow/openai/__init__.py:371, in save_model(model, task, path, conda_env, code_paths, mlflow_model, signature, input_example, pip_requirements, extra_pip_requirements, metadata, **kwargs)
369 _validate_and_prepare_target_save_path(path)
370 code_dir_subpath = _validate_and_copy_code_paths(code_paths, path)
--> 371 task = _get_task_name(task)
373 if mlflow_model is None:
374 mlflow_model = Model()
File /databricks/python/lib/python3.10/site-packages/mlflow/openai/__init__.py:181, in _get_task_name(task)
179 return _class_to_task(task)
180 else:
--> 181 raise mlflow.MlflowException(
182 f"Unsupported task type: {type(task)}", error_code=INVALID_PARAMETER_VALUE
183 )
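The trace shows the failure point: with openai>=1.0, module-level attributes such as openai.ChatCompletion are replaced by APIRemovedInV1Proxy objects, so MLflow's type-based task lookup no longer recognizes them. The following is a simplified, hypothetical sketch of that kind of dispatch (not MLflow's actual _get_task_name implementation) illustrating why a proxy instance is rejected while a plain string task name passes through:

```python
# Hypothetical sketch of type-based task dispatch (NOT MLflow's real code):
# string task names pass through; anything else without a known class
# mapping is rejected with an "Unsupported task type" error.

class APIRemovedInV1Proxy:
    """Stand-in for the proxy that openai>=1.0 places at openai.ChatCompletion."""


def get_task_name(task):
    # Strings are accepted as-is; a class or instance would need an explicit
    # mapping to a task name, which a removed-API proxy does not have.
    if isinstance(task, str):
        return task
    raise ValueError(f"Unsupported task type: {type(task)}")


print(get_task_name("chat.completions"))  # a string task name is accepted

try:
    # Mimics passing task=openai.ChatCompletion under openai>=1.0
    get_task_name(APIRemovedInV1Proxy())
except ValueError as exc:
    print(exc)  # Unsupported task type: <class '__main__.APIRemovedInV1Proxy'>
```

The same dispatch failure explains the later reports in this thread: passing openai.chat.completions.create (a bound method) yields "Unsupported task type: <class 'method'>" for the same reason.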
Other info / logs
REPLACE_ME
What component(s) does this bug affect?
- [ ] area/artifacts: Artifact stores and artifact logging
- [ ] area/build: Build and test infrastructure for MLflow
- [ ] area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] area/docs: MLflow documentation pages
- [ ] area/examples: Example code
- [ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- [X] area/models: MLmodel format, model serialization/deserialization, flavors
- [ ] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] area/projects: MLproject format, project running backends
- [ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- [ ] area/server-infra: MLflow Tracking server backend
- [ ] area/tracking: Tracking Service, tracking client APIs, autologging
What interface(s) does this bug affect?
- [ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] area/windows: Windows support
What language(s) does this bug affect?
- [ ] language/r: R APIs and clients
- [ ] language/java: Java APIs and clients
- [ ] language/new: Proposals for new client languages
What integration(s) does this bug affect?
- [ ] integrations/azure: Azure and Azure ML integrations
- [ ] integrations/sagemaker: SageMaker integrations
- [X] integrations/databricks: Databricks integrations
What version of the openai SDK are you installing?
@BenWilson2 I didn't specify a version, but it looks like it installs 1.6.1. I was able to fix the error by using task="chat.completions", which is the new way to use the OpenAI client, as opposed to the task=openai.ChatCompletion that is shown in the MLflow documentation.
There are a couple of places in the docs that show examples with openai.ChatCompletion; I can put in a PR if we want all of them updated. Or, even better, update the error message to explain that either task=openai.ChatCompletion or task="chat.completions" can be used.
@mlflow/mlflow-team Please assign a maintainer and start triaging this issue.
For further information: I'm getting the same error in multiple versions of MLflow and OpenAI. I think the docs definitely need to be updated with a working example and a clear package version for the examples.
Thanks for the updates here. We're going to be updating the implementation to support only OpenAI SDK >= 1.0 soon; it should be part of the 2.11 release.
I think it is an outdated version of the OpenAI API library; the APIRemovedInV1Proxy class was removed from the new version, I guess.
I'm having the same issue with task=openai.ChatCompletion.
@cdreetz Did you mean change to task=openai.chat.completions? You stated that using task=chat.completions worked for you, but I don't see how, since that is not part of the class. Unfortunately, when I try task=openai.chat.completions, I get the following:
Exception has occurred: MlflowException
Unsupported task type: <class 'openai.resources.chat.completions.Completions'>
File "C:\Users\p0064107\OneDrive - Parsons Corp\Documents\AI\Python Code\mlFlow.py", line 31, in
I found the OpenAI v1.0.0 Migration Guide documentation showing the change below, but I have not had any success with it.
from openai import OpenAI
openai.ChatCompletion.create() -> client.chat.completions.create()
Thoughts anyone?
Thanks Chris
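For reference, the client-call change quoted from the migration guide looks roughly like the sketch below. This is a hedged illustration, not code from this thread: the ask() helper name is my own, and actually running it requires openai>=1.0 installed and OPENAI_API_KEY set.

```python
# Sketch of the openai>=1.0 client pattern from the migration guide above.
# The ask() helper is hypothetical; running it needs openai>=1.0 and an
# OPENAI_API_KEY in the environment.

def ask(question: str) -> str:
    from openai import OpenAI  # lazy import so this file loads without openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    # client.chat.completions.create() replaces openai.ChatCompletion.create()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

Note, however, that this is the client-call migration only; it does not by itself change what mlflow.openai.log_model accepts as a task.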
I'm experiencing the same issue: when I try to call openai.chat.completions, I get the same error as @15890cle; when I call openai.chat.completions.create, it returns MlflowException: Unsupported task type: <class 'method'>.
Can anyone kindly advise on how to overcome this hurdle?
Thanks.
@15890cle At the time I was working in Databricks; now I'm trying it locally and it's not working with the openai.chat.completions that I got working in the Databricks notebook. I'm assuming it's a problem with the versions I currently have installed, but whatever version I was using at the time specifically had openai.chat.completions. I'm going to go back, see if I can figure it out, and let you know. Even looking at the past couple of diffs, I don't see anything that would have made openai.chat.completions stop working, so I'm not sure yet.
@15890cle @gabriel-salgueiro-ey To use the new chat.completions, use task = "chat.completion" instead of task = openai.ChatCompletion.
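Putting the suggested fix together, here is a minimal sketch of the logging call with a string task. This is a hedged illustration based on the comment above, not documented behavior: the exact supported task string may differ across MLflow versions, and running it requires mlflow, openai>=1.0, and an API key.

```python
# Hedged sketch of the workaround: pass the task as a string instead of the
# removed pre-1.0 class. "chat.completion" follows the comment above; check
# your installed MLflow version's docs for the exact supported task names.

TASK = "chat.completion"  # instead of task=openai.ChatCompletion


def log_chat_model():
    """Log an OpenAI chat model; needs mlflow, openai>=1.0, and an API key."""
    import mlflow  # lazy import so this sketch loads without mlflow installed

    with mlflow.start_run():
        return mlflow.openai.log_model(
            model="gpt-3.5-turbo-1106",
            task=TASK,  # string task name sidesteps type-based dispatch
            artifact_path="model",
            messages=[{"role": "system", "content": "You are a helpful assistant."}],
        )
```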
Is this resolved and how?
@suyogkute it is resolved by adding support for the OpenAI SDK 1.x, enabling MLflow to conform to the new means of specifying chat endpoints. This enhancement will be released in MLflow 2.11.