
Real-time Batch Inference with Torchserve - Number of batch response mismatched

Open vinayak-shanawad opened this issue 3 years ago • 0 comments

Hi Team,

Greetings!!

I tried working through the following notebook to understand how real-time batch inference works, but it returns the error "number of batch response mismatched". Could you please take a look?

Notebook: https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/pytorch_batch_inference/sagemaker_batch_inference_torchserve.ipynb

Error description: `MODEL_LOG - model: model, number of batch response mismatched, expect: 3, got: 1.`
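For context, this error is raised when a TorchServe handler returns fewer responses than there are requests in the batch: with `batch_size` set to 3, TorchServe hands the handler a list of 3 requests and expects a list of exactly 3 responses back. Below is a minimal, hypothetical sketch (plain Python, not the notebook's actual handler code) illustrating that contract; the `handle` function and the request/response shapes here are assumptions for illustration only.

```python
# Hypothetical sketch of TorchServe's batching contract (assumed shapes,
# not the notebook's real handler): handle() receives one dict per request
# in the batch and must return a list of the SAME length.
def handle(batch_requests):
    responses = []
    for request in batch_requests:
        # Each request typically carries its payload under "body" or "data".
        data = request.get("body", request.get("data"))
        responses.append({"prediction": repr(data)})  # one response per request
    # If len(responses) != len(batch_requests), TorchServe logs
    # "number of batch response mismatched, expect: N, got: M".
    return responses
```

In other words, a handler that aggregates the whole batch into a single result (returning a list of length 1 for a batch of 3) would produce exactly the `expect: 3, got: 1` mismatch shown in the log above.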

Thanks

vinayak-shanawad — Aug 03 '22 11:08