
[Feature] Add a serve command to start a serverless prediction serving endpoint

typhoonzero opened this issue 8 years ago • 2 comments

Users could run `paddlecloud serve -model-path xxx -scale 100 -cpu 1 -memory 8Gi -entry "infer.py"` to start a serverless URL endpoint for serving the model.
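A minimal sketch of what the proposed `serve` subcommand's flag parsing could look like, using only Go's standard library. This is not the actual PaddleCloud CLI code; the flag names simply mirror the proposal above, and the `serveConfig` struct and `parseServeFlags` helper are hypothetical.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// serveConfig collects the resources requested for one serving job.
type serveConfig struct {
	modelPath string // path of the trained model to serve
	scale     int    // number of serving replicas
	cpu       int    // CPU request per replica
	memory    string // memory request per replica, e.g. "8Gi"
	entry     string // entry script that loads the model and answers requests
}

// parseServeFlags parses the "serve" subcommand arguments.
func parseServeFlags(args []string) (*serveConfig, error) {
	fs := flag.NewFlagSet("serve", flag.ContinueOnError)
	c := &serveConfig{}
	fs.StringVar(&c.modelPath, "model-path", "", "path of the trained model")
	fs.IntVar(&c.scale, "scale", 1, "number of serving replicas")
	fs.IntVar(&c.cpu, "cpu", 1, "CPU request per replica")
	fs.StringVar(&c.memory, "memory", "1Gi", "memory request per replica")
	fs.StringVar(&c.entry, "entry", "infer.py", "inference entry script")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	if c.modelPath == "" {
		return nil, fmt.Errorf("-model-path is required")
	}
	return c, nil
}

func main() {
	if len(os.Args) < 2 || os.Args[1] != "serve" {
		fmt.Fprintln(os.Stderr, "usage: paddlecloud serve [flags]")
		os.Exit(2)
	}
	cfg, err := parseServeFlags(os.Args[2:])
	if err != nil {
		os.Exit(2)
	}
	// In the real command this config would be turned into a Kubernetes
	// Deployment/Service for the serving replicas; here we just print it.
	fmt.Printf("would create a serving deployment: %+v\n", cfg)
}
```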

typhoonzero avatar Sep 14 '17 12:09 typhoonzero

Which inference type do we support, online or offline? If we set up online serving, we also need to add an ingress rule for the submitted serving job.
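A rough sketch of the kind of Ingress object the serve command could create for an online serving job, built with client-go's `extensions/v1beta1` types (current as of 2017). The job name, namespace, host, and port here are placeholders, and the real command would submit this object through a Kubernetes clientset rather than printing it.

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/api/extensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// servingIngress routes HTTP traffic for one serving job to the Service
// that sits in front of its serving replicas.
func servingIngress(jobName, namespace, host string, port int) *v1beta1.Ingress {
	return &v1beta1.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name:      jobName,
			Namespace: namespace,
		},
		Spec: v1beta1.IngressSpec{
			Rules: []v1beta1.IngressRule{{
				Host: host, // e.g. <job>.serving.<cluster-domain>
				IngressRuleValue: v1beta1.IngressRuleValue{
					HTTP: &v1beta1.HTTPIngressRuleValue{
						Paths: []v1beta1.HTTPIngressPath{{
							Path: "/",
							Backend: v1beta1.IngressBackend{
								ServiceName: jobName,
								ServicePort: intstr.FromInt(port),
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	ing := servingIngress("paddle-serve-demo", "paddlecloud", "demo.serving.example.com", 8866)
	out, _ := json.MarshalIndent(ing, "", "  ")
	fmt.Println(string(out))
}
```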

Yancey0623 avatar Sep 15 '17 02:09 Yancey0623

Online, of course.

Offline inference can use the same method as training.

typhoonzero avatar Sep 15 '17 03:09 typhoonzero