multi-model-server
Multi Model Server is a tool for serving neural net models for inference
In the case you gave, the POST URL is unique, for example: http://192.168.12.111:8001/predictions/live_people. If I want to implement different functions in one Docker container, that is, use the same IP and port, only...
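One common answer to the question above is that MMS can serve several models behind the same IP and port: each model is registered through the management API and then addressed by name under its own `/predictions/<model_name>` path. A minimal sketch, assuming MMS is already running on its default ports (8080 for inference, 8081 for management); the squeezenet archive URL is the public model-zoo example, and `kitten.jpg`/`frame.jpg` are placeholder inputs:

```shell
# Register an additional model at runtime via the management API (port 8081).
curl -X POST "http://localhost:8081/models?url=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar"

# Each registered model gets its own path on the same inference port (8080):
curl -X POST http://localhost:8080/predictions/squeezenet_v1.1 -T kitten.jpg
curl -X POST http://localhost:8080/predictions/live_people -T frame.jpg
```

With this setup the IP and port stay fixed; only the model name in the prediction path changes per function.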
Per the AWS documentation [1], the environment variable that decides whether the container is running in multi-model mode is SAGEMAKER_MULTI_MODEL, while in Ping.java the environment variable read is SAGEMAKER_MULTI_MODE [2]. [1] -...
OS: KDE Neon 20.04 (Ubuntu 20.04 based). Python: 3.8. To convert an ONNX model to the `.mar` format, I followed [this guide](https://github.com/awslabs/multi-model-server/blob/master/model-archiver/docs/convert_from_onnx.md). However, when trying to convert the model...
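For reference, the conversion in that guide boils down to running `model-archiver` over a directory containing the ONNX file and a service handler. A rough sketch, with the model name, paths, and handler module here being placeholders (check the model-archiver README for the exact handler your model needs):

```shell
# Dependencies used by the ONNX conversion path.
pip install mxnet model-archiver onnx

# Package the model directory into a .mar archive.
# --model-path points at the folder holding the .onnx file and handler code;
# --export-path is where the resulting .mar is written.
model-archiver --force \
    --model-name squeezenet_onnx \
    --model-path /path/to/squeezenet_onnx \
    --handler mxnet_vision_service:handle \
    --export-path /tmp/model-store
```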
I want to run the MMS Docker container serving the ArcFace-ResNet100 model, so I ran the following command to use the archived model from the model zoo: `docker...
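For context, a typical way to launch the container with a model-zoo archive looks like the following (image name and ports as in the MMS README; the exact `.mar` URL for ArcFace-ResNet100 is an assumption and should be taken from the model zoo page):

```shell
# Start the MMS container, exposing the inference (8080) and management (8081) ports,
# and load the ArcFace archive directly from the model zoo URL.
docker run -itd --name mms -p 8080:8080 -p 8081:8081 \
    awsdeeplearningteam/multi-model-server \
    multi-model-server --start \
    --models arcface=https://s3.amazonaws.com/model-server/model_archive_1.0/arcface-resnet100.mar
```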
## Issue #, if available: ## Description of changes: This change removes the lock in the function pollBatch(). The reason is that each WorkerThread calls pollBatch to get a...
Before or while filing an issue, please feel free to join our [slack channel](https://join.slack.com/t/mms-awslabs/shared_invite/enQtNDk4MTgzNDc5NzE4LTBkYTAwMjBjMTVmZTdkODRmYTZkNjdjZGYxZDI0ODhiZDdlM2Y0ZGJiZTczMGY3Njc4MmM3OTQ0OWI2ZDMyNGQ) to get in touch with the development team, ask questions, find out what's cooking, and more!...
The tutorial [here](https://github.com/awslabs/multi-model-server/blob/master/model-archiver/README.md#creating-a-model-archive) indicates that we should download squeezenet as follows: `curl -o squeezenet/squeezenet_v1.1-symbol.json https://s3.amazonaws.com/model-server/model_archive_1.0/examples/squeezenet_v1.1/squeezenet_v1.1-symbol.json` Running that command and opening squeezenet_v1.1-symbol.json shows an S3 error response instead of the model: `NoSuchBucket`: The...
The slack channel link [here](https://github.com/awslabs/multi-model-server/blob/master/README.md#serve-a-model) isn't valid anymore.
`preload_model=true` `tensor_gpu-inl.h:35: Check failed: e == cudaSuccess: CUDA: initialization error`