azure-sdk-for-python
Get scoring script from Azure ML Datastores
- Package Name: azure.ai.ml
- Package Version: 1.21.1
- Operating System: Ubuntu 22.04
- Python Version: 3.10
Describe the bug
According to the documentation, when you create a batch deployment it is possible to specify a scoring script from a local path or from an "http:", "https:", or "azureml:" URI. In my case, specifying a local path works correctly, but specifying a remote location does not.
To Reproduce
Steps to reproduce the behavior:
When deploying a batch endpoint, use a path from ML datastores or blob storage instead of a local path.
from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
from azure.ai.ml.entities import (
    BatchRetrySettings,
    CodeConfiguration,
    Environment,
    Model,
    ModelBatchDeployment,
    ModelBatchDeploymentSettings,
)

# `model` is the model file name (e.g. "my_model.pkl");
# `ml_client` is an authenticated MLClient.
deployment = ModelBatchDeployment(
    name=model.split('.')[0].replace('_', '-') + "-1",
    endpoint_name=model.split('.')[0].replace('_', '-').lower(),
    model=Model(
        path="azureml://datastores/models/paths/" + model,
        type=AssetTypes.CUSTOM_MODEL,
        name=model.split('.')[0],
    ),
    # Referencing the code folder via an azureml:// datastore URI fails;
    # pointing `code` at a local folder works.
    code_configuration=CodeConfiguration(
        code="azureml://datastores/inference/paths/",
        scoring_script="inference.py",
    ),
    environment=Environment(
        name="deploy-env",
        conda_file="../environment/inference-conda.yml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    compute="cpu-cluster",
    settings=ModelBatchDeploymentSettings(
        max_concurrency_per_instance=2,
        mini_batch_size=10,
        instance_count=2,
        output_action=BatchDeploymentOutputAction.APPEND_ROW,
        output_file_name="predictions.csv",
        retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
        logging_level="info",
    ),
)
ml_client.begin_create_or_update(deployment).result()
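The deployment and endpoint names above are derived from the model file name with repeated split/replace expressions. A small helper (hypothetical, not part of the SDK) makes that derivation explicit and easier to test:

```python
def deployment_names(model_file: str) -> tuple[str, str]:
    """Derive (deployment_name, endpoint_name) from a model file name.

    Mirrors the expressions used in the repro above:
    e.g. "Sales_Forecast.pkl" -> ("Sales-Forecast-1", "sales-forecast")
    """
    # Strip the extension and replace underscores with hyphens.
    base = model_file.split('.')[0].replace('_', '-')
    # Deployment gets a "-1" suffix; endpoint names must be lowercase.
    return base + "-1", base.lower()
```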
Thank you for the feedback, @mlopezfernandez. We will investigate and get back to you as soon as possible.
cc @azureml-github
Any update on this? Still seems to be failing.