sagemaker-spark
[Feature Request] Distributed inference with local mode
System Information
- Spark or PySpark: Spark
- SDK Version: N/A
- Spark Version: N/A
- Algorithm (e.g. KMeans): N/A
Describe the problem
Currently, the Python SageMaker SDK supports local mode:
import numpy
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.mxnet import MXNetModel

sess = sagemaker.Session()
model_location = 's3://mybucket/my_model.tar.gz'
code_location = 's3://mybucket/sourcedir.tar.gz'
image_url = get_image_uri(sess.boto_region_name, 'image', repo_version="latest")
s3_model = MXNetModel(model_data=model_location, role='SageMakerRole', image=image_url,
                      entry_point='mnist.py', source_dir=code_location)
# instance_type='local' runs the container on the local machine
predictor = s3_model.deploy(initial_instance_count=1, instance_type='local')
data = numpy.zeros(shape=(1, 1, 28, 28))
predictor.predict(data)
# Tear down the endpoint container and delete the corresponding endpoint configuration
predictor.delete_endpoint()
# Deletes the model
predictor.delete_model()
Right now, this SDK forces us to create an endpoint when we define a model.
Is there any plan to support local mode for inference?
Hi @miaekim,
sagemaker-spark does not provide hosting/inference in local mode, or other SageMaker services in local mode, the way the Python SDK does.
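Until local-mode inference is supported in sagemaker-spark, one possible workaround is to collect a (small) Spark DataFrame on the driver and feed it to a local-mode predictor from the Python SDK, as in the example above. A minimal sketch of the batching step, assuming the `predictor` and the 28x28 input shape from that example (`rows_to_batches` is a hypothetical helper, not part of either SDK):

```python
import numpy

def rows_to_batches(rows, batch_size):
    # Hypothetical helper: group flat 784-feature rows into numpy arrays
    # of shape (batch, 1, 28, 28), suitable for predictor.predict().
    batch = []
    for row in rows:
        batch.append(numpy.asarray(row, dtype='float32').reshape(1, 28, 28))
        if len(batch) == batch_size:
            yield numpy.stack(batch)
            batch = []
    if batch:
        # Emit the final, possibly smaller, batch
        yield numpy.stack(batch)

# Usage (assumes 'predictor' from the local-mode deploy above and a
# PySpark DataFrame 'df' whose rows are flat feature vectors):
# for batch in rows_to_batches(df.toPandas().values, batch_size=64):
#     predictor.predict(batch)
```

This only works for data that fits on the driver; it does not give the distributed inference the feature request asks for.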
Got it. Please update this issue when you plan to implement it :)