azure-sdk-for-python
AzureML Kubernetes service inference script error when used with inference-schema decorators
I receive the error `run() got an unexpected keyword argument` (screenshot below) while testing an Azure Kubernetes Service web service through the UI. The service itself deploys without any issues.
The inference script works if:
- I don't use the inference-schema decorators (I am using them to automatically generate a Swagger doc), or
- I use Azure Container Instances instead of Azure Kubernetes Service.
My best guess is that there is a bug in how the inference-schema decorators interact with the Kubernetes service. Can someone help with this issue?

Here is the inference script to reproduce the error (Documentation Reference):
```python
import numpy as np
import pandas as pd
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType


def init():
    global model
    # Replace filename if needed.
    print('Model Initialized')


# providing 3 sample inputs for schema generation
numpy_sample_input = NumpyParameterType(np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                                                  [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]], dtype='float64'))
pandas_sample_input = PandasParameterType(pd.DataFrame({'name': ['Sarah', 'John'], 'age': [25, 26]}))
standard_sample_input = StandardPythonParameterType(0.0)

# This is a nested input sample; any item wrapped by `ParameterType` will be described by the schema
sample_input = StandardPythonParameterType({'input1': numpy_sample_input,
                                            'input2': pandas_sample_input,
                                            'input3': standard_sample_input})

sample_global_parameters = StandardPythonParameterType(1.0)  # this is optional
sample_output = StandardPythonParameterType([1.0, 1.0])
outputs = StandardPythonParameterType({'Results': sample_output})  # 'Results' is case sensitive


@input_schema('Inputs', sample_input)
# 'Inputs' is case sensitive
@input_schema('GlobalParameters', sample_global_parameters)
# this is optional, 'GlobalParameters' is case sensitive
@output_schema(outputs)
def run(Inputs, GlobalParameters):
    # the parameters here have to match those in the decorators; both 'Inputs'
    # and 'GlobalParameters' are case sensitive
    try:
        print(Inputs)
        return ["Success"]
    except Exception as e:
        error = str(e)
        return error
```
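For context, when these decorators are applied, the scoring server is expected to map the top-level keys of the JSON request body onto `run()`'s keyword arguments (`Inputs` and `GlobalParameters`). Below is a minimal client sketch of what such a request could look like; the scoring URI and key are placeholders, and the exact layout of the pandas input depends on the orientation configured on `PandasParameterType`, so treat the payload as an illustration rather than a definitive format.

```python
import json
import requests

# Placeholder endpoint details; replace with your deployment's scoring URI and key.
scoring_uri = "http://<aks-endpoint>/api/v1/service/<service-name>/score"
api_key = "<service-key>"

# Top-level keys are expected to match the decorator parameter names.
payload = {
    "Inputs": {
        "input1": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                   [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]],
        "input2": [{"name": "Sarah", "age": 25}, {"name": "John", "age": 26}],
        "input3": 0.0,
    },
    "GlobalParameters": 1.0,
}

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
print(response.status_code, response.text)
```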
Thanks for the feedback, we’ll investigate asap.
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @azureml-github.
Issue Details
| Author: | chinmaycpalande |
|---|---|
| Assignees: | SaurabhSharma-MSFT |
| Labels: | - |
| Milestone: | - |
Hello 👋, are there any updates or further investigations on this issue?
Thanks!
We're working on multiple fixes in this area for the UI. I'll use the sample above to check whether this issue is fixed by those changes as well.
Hello 👋, are there any further updates on this issue?
Thanks!
Hello, any feedback? It's actually not just the UI: when I test it with the SDK it does not work either. It's as if the inference script does not consider the second parameter, 'GlobalParameters'. Any update, please?
Hello, it seems that the problem is with GlobalParameters. Try removing this input from both the decorators and the function, and handle those cases inside the function instead. My AKS service started working after that, but it still does not explain why the problem happens.
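For anyone hitting the same error, here is a minimal sketch of the workaround described in the comment above (dropping 'GlobalParameters' from both the decorators and the `run()` signature), assuming the same sample schema as the original script; this is an untested illustration of that comment, not an official fix.

```python
import numpy as np
import pandas as pd
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType


def init():
    global model
    print('Model Initialized')


# Same style of sample inputs as the original script, minus the GlobalParameters sample.
numpy_sample_input = NumpyParameterType(np.array([[1, 2, 3], [3, 2, 1]], dtype='float64'))
pandas_sample_input = PandasParameterType(pd.DataFrame({'name': ['Sarah', 'John'], 'age': [25, 26]}))
standard_sample_input = StandardPythonParameterType(0.0)

sample_input = StandardPythonParameterType({'input1': numpy_sample_input,
                                            'input2': pandas_sample_input,
                                            'input3': standard_sample_input})
sample_output = StandardPythonParameterType([1.0, 1.0])
outputs = StandardPythonParameterType({'Results': sample_output})


@input_schema('Inputs', sample_input)
@output_schema(outputs)
def run(Inputs):
    # Any "global" options can be passed as an extra key inside 'Inputs'
    # and handled here instead of as a separately decorated argument.
    try:
        print(Inputs)
        return ["Success"]
    except Exception as e:
        return str(e)
```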
Closing legacy issue.
Please consider upgrading to AzureML v2 CLI/SDK. https://learn.microsoft.com/en-us/azure/machine-learning/concept-v2
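As a rough orientation for the v2 migration, here is a minimal sketch of deploying a scoring script to a Kubernetes online endpoint with the azure-ai-ml (v2) SDK. All names, the compute target, the model path, the base image, and the conda file below are placeholders rather than values from this issue, and details may differ for your workspace.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    KubernetesOnlineEndpoint,
    KubernetesOnlineDeployment,
    CodeConfiguration,
    Environment,
    Model,
)
from azure.identity import DefaultAzureCredential

# Placeholder workspace details.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Endpoint bound to an attached Kubernetes compute (placeholder name).
endpoint = KubernetesOnlineEndpoint(
    name="schema-demo-endpoint",
    compute="<attached-k8s-compute-name>",
    auth_mode="key",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deployment that runs the scoring script (placeholder model and environment).
deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=Model(path="./model"),
    environment=Environment(
        image="<inference-base-image>",
        conda_file="./environment/conda.yml",
    ),
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```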