inference-server topic
BMW-YOLOv4-Inference-API-GPU
This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.
BMW-YOLOv4-Inference-API-CPU
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
BMW-TensorFlow-Inference-API-CPU
This is a repository for an object detection inference API using the TensorFlow framework.
pinferencia
Python + Inference: a model deployment library in Python. A simple, minimal model inference server.
ai-serving
Serving AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints
k3ai
K3ai is a lightweight, fully automated, AI infrastructure-in-a-box solution that lets anyone experiment quickly with Kubeflow pipelines. K3ai runs anywhere from edge devices to laptops.
gpu-rest-engine
A REST API for Caffe using Docker and Go
orkhon
Orkhon: ML Inference Framework and Server Runtime
fastDeploy
Deploy DL/ML inference pipelines with minimal extra code.
Server
A standalone inference server for trained Rubix ML estimators.
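Most of the servers listed above share the same pattern: a model sits behind an HTTP endpoint, clients POST input data, and the server returns predictions as JSON. The sketch below illustrates that request/response shape using only the Python standard library; the `/predict` route and the response fields are illustrative assumptions, not the actual API of any project listed here, and the model step is replaced by a dummy detection.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class PredictHandler(BaseHTTPRequestHandler):
    """Minimal inference endpoint: POST JSON in, JSON predictions out."""

    def do_POST(self):
        if self.path != "/predict":  # hypothetical route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        # A real server would run the loaded model here; we echo a dummy detection.
        result = {"predictions": [{"label": "object",
                                   "confidence": 0.9,
                                   "input_id": body.get("id")}]}
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the example quiet

# Start the server on a random free port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as a client: send an input and read back the predictions.
req = Request(f"http://127.0.0.1:{server.server_port}/predict",
              data=json.dumps({"id": 1}).encode(),
              headers={"Content-Type": "application/json"})
response = json.loads(urlopen(req).read())
print(response["predictions"][0])
server.shutdown()
```

The production servers in this list differ mainly in what sits behind that endpoint (Darknet, OpenCV, TensorFlow, Caffe, ONNX/PMML runtimes) and in extras such as gRPC endpoints, batching, and GPU scheduling.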