amazon-sagemaker-examples
Real-time Batch Inference with Torchserve - Number of batch response mismatched
Hi Team,
Greetings!!
I tried working through the following notebook to understand how real-time batch inference works, but running it returns the error "number of batch response mismatched". Could you please take a look?
Notebook: https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/pytorch_batch_inference/sagemaker_batch_inference_torchserve.ipynb
Error description: MODEL_LOG - model: model, number of batch response mismatched, expect: 3, got: 1.
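For context, my understanding is that TorchServe raises this error when the handler returns fewer responses than there are requests in the batch: with a batch of 3 it expects a list of 3 results, but the handler produced 1. A minimal sketch of that contract (handler names and request shape are hypothetical, not taken from the notebook):

```python
# Hypothetical illustration of the TorchServe batching contract:
# handle() receives a list of requests and must return a list with
# exactly one response per request.

def handle_mismatched(requests):
    # Bug: wraps all per-request results into a single response,
    # so a batch of 3 yields "expect: 3, got: 1".
    results = [len(r.get("body", b"")) for r in requests]
    return [results]

def handle_correct(requests):
    # Correct: one response element per incoming request.
    return [len(r.get("body", b"")) for r in requests]

batch = [{"body": b"a"}, {"body": b"bb"}, {"body": b"ccc"}]
print(len(handle_mismatched(batch)))  # 1 response for 3 requests
print(len(handle_correct(batch)))     # 3 responses for 3 requests
```

So the fix, if this reading is right, would be for the notebook's handler to return a per-request list rather than a single aggregated result.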
Thanks