# fastapi-ml

Deploying and hosting a machine learning model with FastAPI and Heroku.
## Want to learn how to build this?
Check out the tutorial.
## Want to use this project?
### With Docker

1. Build and tag the Docker image:

    ```sh
    $ docker build -t fastapi-prophet .
    ```

2. Spin up the container:

    ```sh
    $ docker run --name fastapi-ml -e PORT=8008 -p 8008:8008 -d fastapi-prophet:latest
    ```

3. Train the model:

    ```sh
    $ docker exec -it fastapi-ml python

    >>> from model import train, predict, convert
    >>> train()
    ```

4. Test:

    ```sh
    $ curl \
      --header "Content-Type: application/json" \
      --request POST \
      --data '{"ticker":"MSFT"}' \
      http://localhost:8008/predict
    ```
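The same request can also be made from Python with only the standard library. A minimal sketch (the endpoint path, port, and payload are taken from the curl command above; the `build_predict_request` and `predict` helper names are illustrative):

```python
import json
from urllib.request import Request, urlopen


def build_predict_request(ticker: str, base_url: str = "http://localhost:8008") -> Request:
    """Build the same POST request the curl command sends."""
    payload = json.dumps({"ticker": ticker}).encode("utf-8")
    return Request(
        f"{base_url}/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict(ticker: str) -> dict:
    """Send the request to the running app and decode the JSON response."""
    with urlopen(build_predict_request(ticker)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the container running, `predict("MSFT")` returns the parsed JSON response.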
### Without Docker

1. Create and activate a virtual environment:

    ```sh
    $ python3 -m venv venv && source venv/bin/activate
    ```

2. Install the requirements:

    ```sh
    (venv)$ pip install -r requirements.txt
    ```

3. Train the model:

    ```sh
    (venv)$ python

    >>> from model import train, predict, convert
    >>> train()
    ```

4. Run the app:

    ```sh
    (venv)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008
    ```

5. Test:

    ```sh
    $ curl \
      --header "Content-Type: application/json" \
      --request POST \
      --data '{"ticker":"MSFT"}' \
      http://localhost:8008/predict
    ```