ServeIt
Simple API serving for Python ML models
ServeIt lets you serve model predictions and supplementary information from a RESTful API using your favorite Python ML library in as little as one line of code:
from serveit.server import ModelServer
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
# fit logistic regression on Iris data
clf = LogisticRegression()
data = load_iris()
clf.fit(data.data, data.target)
# initialize server with a model and start serving predictions
ModelServer(clf, clf.predict).serve()
Your new API is now accepting POST requests at localhost:5000/predictions! Please see the examples directory for detailed examples across domains (e.g., regression, image classification), including live examples.
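Once the server is running, clients POST feature data as JSON to the predictions endpoint. The snippet below builds such a request body for one Iris sample; note that the exact body format depends on the configured data loader, and this sketch assumes the default loader parses the JSON body directly as a feature array (an assumption, not something this README confirms):

```python
import json

# One Iris sample: sepal length, sepal width, petal length, petal width (cm).
sample = [[6.3, 2.8, 5.1, 1.5]]

# Assumption: the default data loader reads the JSON body directly as a
# 2-D feature array, so the body is just the nested list.
body = json.dumps(sample)

# With the third-party `requests` library installed, the call would be:
#   import requests
#   response = requests.post(
#       "http://localhost:5000/predictions",
#       data=body,
#       headers={"Content-Type": "application/json"},
#   )
#   print(response.json())  # JSON-serialized prediction(s)
```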
Features
Current ServeIt features include:
- Model inference serving via RESTful API endpoint
- Extensible library for inference-time data loading, preprocessing, input validation, and postprocessing
- Supplementary information endpoint creation
- Automatic JSON serialization of responses
- Configurable request and response logging (work in progress)
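To illustrate the supplementary-information feature, the sketch below registers metadata endpoints next to the prediction endpoint. It assumes `ModelServer` exposes a `create_info_endpoint(name, data)` helper (an assumption based on the ServeIt examples, not confirmed by this README); the metadata itself is ordinary Python data that the server would serialize to JSON:

```python
# Metadata worth exposing alongside predictions (plain Python data).
feature_names = [
    "sepal length (cm)", "sepal width (cm)",
    "petal length (cm)", "petal width (cm)",
]
target_labels = ["setosa", "versicolor", "virginica"]

# Assumption: ModelServer provides a create_info_endpoint(name, data)
# helper that registers a GET endpoint returning `data` as JSON:
#
#   server = ModelServer(clf, clf.predict)
#   server.create_info_endpoint("features", feature_names)
#   server.create_info_endpoint("target_labels", target_labels)
#   server.serve()
```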
Supported libraries
The following libraries are currently supported:
- Scikit-Learn
- Keras
- PyTorch
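The one-line pattern from the scikit-learn example carries over to the other libraries, since `ModelServer` is constructed from a model object plus a prediction callable. A pseudocode sketch (library-specific model and training code omitted):

```
# Pseudocode: fit a model with your library of choice, then serve it.
model = <fit a Keras or PyTorch model>

ModelServer(model, model.predict).serve()  # Keras: predict is a method
ModelServer(model, predict_fn).serve()     # PyTorch: wrap the forward pass
                                           # in a callable such as predict_fn
```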
Installation
ServeIt supports Python 2.7 and Python 3.6. Installation is easy with pip: pip install serveit
Building
You can build and install locally with: python setup.py install
License
MIT
Please consider buying me a coffee if you like my work.