
fastDeploy

Deploy DL/ML inference pipelines with minimal extra code.

Installation:

pip install --upgrade fastdeploy

Usage:

# Invoke fastdeploy 
fastdeploy --help
# or
python -m fastdeploy --help

# Start prediction "loop" for recipe "echo_json"
fastdeploy --recipe ./echo_json --mode loop

# Start REST APIs for recipe "echo_json"
fastdeploy --recipe ./echo_json --mode rest

# Auto-generate a Dockerfile and build a Docker image. --base is the Docker base image
fastdeploy --recipe ./recipes/echo_json/ \
 --mode build_rest --base python:3.6-slim
# fastdeploy_echo_json built!

# Run the built Docker image
docker run -it -p 8080:8080 fastdeploy_echo_json
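
A recipe is a directory containing your prediction code (here, ./recipes/echo_json/). The exact layout is documented in recipe.md; the sketch below is a minimal, hypothetical echo-style recipe, where the example list and the predictor name/signature are assumptions to be checked against that doc.

# predictor.py -- hypothetical minimal recipe (layout per recipe.md; names are assumptions)

# example inputs, used by fastDeploy to warm up the pipeline
example = [{"text": "hello"}, {"text": "world"}]

def predictor(inputs=[], batch_size=1):
    # must return one output per input, in the same order
    return inputs  # echo each input back unchanged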

fastDeploy monitor

  • available at localhost:8080 (or the port set with --port)

  • Writing a recipe for your prediction script: https://github.com/notAI-tech/fastDeploy/blob/master/recipe.md
  • cURL and Python inference commands: https://github.com/notAI-tech/fastDeploy/blob/master/inference.md
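
Once a REST deployment (or the Docker container above) is running, predictions can be requested over HTTP. The snippet below is only a hedged illustration: the endpoint path and payload keys are assumptions, and the authoritative cURL/Python commands are in inference.md.

import requests

# NOTE: the "/infer" path and {"data": [...]} payload are assumptions; see inference.md
resp = requests.post(
    "http://localhost:8080/infer",
    json={"data": [{"text": "hello"}]},
)
print(resp.json())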