
This Guidance demonstrates how to deploy a machine learning inference architecture on Amazon Elastic Kubernetes Service (Amazon EKS). It addresses the basic implementation requirements as well as ways...

Results: 7 issues in guidance-for-machine-learning-inference-on-aws

*What is the PR about?* This PR integrates TorchServe with this solution. Supports CPU & GPU. Tested with `./test.sh run bmk`. From a UX point of view, the user needs...

I am following the README and using all default settings. I get the following error after running `bash deploy.sh run`:

```
Runtime: kubernetes
Processor: graviton
error: error validating "STDIN": error...
```
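An `error validating "STDIN"` from kubectl usually means the manifest piped into it is malformed YAML, often because a template variable expanded to an empty string. A sketch of how one might debug this by writing the rendered manifest to a file and validating it client-side; the file name and manifest contents here are hypothetical stand-ins, not what `deploy.sh` actually generates:

```shell
# Hypothetical example manifest standing in for whatever deploy.sh renders.
cat > /tmp/inference-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference
  template:
    metadata:
      labels:
        app: inference
    spec:
      containers:
      - name: model
        image: public.ecr.aws/example/model:latest
EOF

# Client-side validation catches schema errors without touching the cluster
# (requires kubectl, so it is commented out in this sketch):
# kubectl apply --dry-run=client -f /tmp/inference-deploy.yaml

grep -q 'kind: Deployment' /tmp/inference-deploy.yaml && echo "manifest written"
```

Inspecting the rendered file directly also makes it easy to spot empty fields (e.g. a missing image tag) that would otherwise be hidden inside the pipe to kubectl.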

*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

The current version of Amazon Linux is Amazon Linux 2023, while the container images here are based on AL2. Using the latest Amazon Linux as an example is always good...
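A minimal sketch of what moving the base image from AL2 to Amazon Linux 2023 might look like. The package names are assumptions, and Neuron package availability on AL2023 should be verified against the Neuron documentation before adopting this:

```shell
# Sketch: writing an AL2023-based Dockerfile. Note that on AL2023 the
# default package manager is dnf rather than yum.
cat > /tmp/Dockerfile.al2023 <<'EOF'
FROM amazonlinux:2023
# python3/python3-pip are assumed requirements of the inference image;
# adjust to match what the existing AL2 Dockerfile installs.
RUN dnf install -y python3 python3-pip shadow-utils && dnf clean all
EOF

grep -q 'amazonlinux:2023' /tmp/Dockerfile.al2023 && echo "dockerfile written"
```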

Getting the following error when I run the build script. I have the registry configured:

```
#8 38.24 No package aws-neuron-runtime-base available.
#8 38.48 No package aws-neuron-runtime available.
#8 38.70...
```
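"No package ... available" during the image build typically means the Neuron yum repository is not configured inside the build container. A sketch of the repo file, based on the configuration published in the AWS Neuron documentation; verify against the current docs, since Neuron package names have changed across releases (newer releases ship `aws-neuronx-*` packages):

```shell
# Sketch: Neuron yum repository definition. In the Dockerfile this file
# would be written to /etc/yum.repos.d/ before installing Neuron packages.
cat > /tmp/neuron.repo <<'EOF'
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF

# The package signing key would also be imported in the Dockerfile, e.g.:
# rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB

grep -q 'yum.repos.neuron.amazonaws.com' /tmp/neuron.repo && echo "repo file written"
```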

I executed `./trace.sh` according to the README, but a Neuron runtime error occurred. Part of the error log:

```
Question: What does the little engine say?
2022-Mar-29 07:07:29.0082 11:11 ERROR NRT:nrt_init Unable...
```

SageMaker Model Monitor can be enabled when a model is deployed to a SageMaker endpoint. Is there any way to monitor the model with this proposed aws-do-inference solution? Thanks.