Deploying models with seldon-core
@DavidGOrtega suggested on Slack:

> Hey guys. Reviewing OpenMlOps they are exposing the models via Seldon and Ambassador API Gateway. This is similar to the QuaroML stack that I did and a very convenient and easy way to expose and of course scale the models. Maybe we can introduce MLEM inside the TPI in conjunction with Consul
We need to get back to this discussion to shape the vision for the MLEM deployment part. It's best to do this as soon as we finish the closed alpha release.
> Maybe we can introduce MLEM inside the TPI in conjunction with Consul
@DavidGOrtega, I'm getting back to your comment since @mike0sv is working on Deployment 2.0 in MLEM. Could you please elaborate on your thought? How do you see this?
Besides that, could MLEM integrate with TPI to deploy to AWS EC2 or GCP?
@casperdcl @0x2b3bfa0 @DavidGOrtega
Also, we're now implementing deployment to Sagemaker. To do that end-to-end we need to provision some AWS resources for the user (roles, policies, etc.). Can we use TPI for that, or do you have plans to implement an option to provision these?
Looks like 3 different feature requests:
- "exposing the models via Seldon and Ambassador API Gateway [...] similar to the QuaroML stack [...] introduce MLEM inside the TPI in conjunction with Consul"
  - not sure I follow. Is this a new cloud feature request on the TPI repo, @DavidGOrtega?
- "MLEM integrate with TPI to deploy to AWS EC2 or GCP"
  - sure, TPI can: provision an instance, upload a workdir, run a script on the instance, auto-recover from spot interruptions (restores the workdir & re-runs the script), auto-clean up on script exit, and download the workdir
  - requirement: provide cloud (AWS/Azure/GCP) credentials via environment variables^1
  - you can use Docker^2
  - you can avoid having to configure/expose service ports on AWS by using a (free) port forwarding service^3
  - you can use the Python wrapper (`pip install tpi`), which auto-detects the OS, downloads/caches `terraform` binaries, installs TPI, and even has a Python API so you don't have to run the CLI explicitly
- "deployment to Sagemaker. To do that e2e we need to provision some AWS resources for the user (Roles, Policies, etc)"
  - sounds like TPI's `permission_set`? See also https://github.com/iterative/terraform-provider-iterative/tree/master/docs/guides/permissions (mentioned in ^1) for a list of permissions
  - FYI about spot instances on Sagemaker: it looks like they're only supported for training, not really for serving
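To make the TPI points above concrete, here is a rough HCL sketch of a single `iterative_task` resource covering the listed capabilities (provision, workdir sync, script run, spot recovery, cleanup) plus `permission_set`. The task name, machine size, instance-profile ARN, environment variable, and serving command are all hypothetical placeholders, not a tested MLEM deployment:

```hcl
# Sketch only: provisions a (spot) instance, uploads the workdir,
# runs the script, auto-recovers from interruptions, and cleans up on exit.
resource "iterative_task" "mlem_serve" {   # hypothetical task name
  cloud   = "aws"   # or "az" / "gcp"
  machine = "m"     # generic medium instance type

  spot = 0          # bid at current spot price; omit for on-demand

  # Hypothetical IAM instance profile; see the permissions guide linked above.
  permission_set = "arn:aws:iam::123456789012:instance-profile/mlem-task"

  environment = {
    MLEM_ENV = "production"   # hypothetical variable
  }

  storage {
    workdir = "."        # uploaded before the script runs
    output  = "results"  # downloaded back when the script exits
  }

  script = <<-END
    #!/bin/bash
    mlem serve model fastapi   # hypothetical serving command
  END
}
```

Cloud credentials are picked up from the usual environment variables (e.g. `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`), per ^1.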