Unique identifiers for what you create with `serve`
Find the `apply-remote` API confusing. Could `mlem serve` return a unique ID or something easier to reference?

```shell
$ mlem serve rf fastapi ...
Created `fancy-id-rf`
$ mlem apply-remote fancy-id-rf test_x.csv
```
reported by @daavoo
My thoughts: maybe this is even more useful if you serve many models simultaneously. This is similar to what you get with `mlem deploy`, btw: there you get a declaration file that has a name, so you can reference a deployment by name. We could extend this to also support building: each package that's built could have a unique ID. But these are different things:
- Deployments that exist outside of the system we use to run MLEM commands (and are "active", i.e. running right now)
- Servings that exist in that system (also "active")
- Build artifacts (not "active"), which are just stored on disk, so the approaches may differ as well
What you are describing IS deployment functionality. We have `mlem deploy apply`, which is capable of exactly that. Both `serve` and `apply-remote` exist just because they are needed for deployment anyway, and we expose them to give our users more flexibility.
I mean, if we do that for `serve`, we'll just duplicate deployments. Instead, you can just use a local Docker deployment. We could also add a dummy deployment that just runs `serve` in a separate process. It would have a name that you can use with `mlem deploy apply`.
As for builds, I don't see any use for those IDs. If you build something to store on disk, just use the path as the ID. We don't version builds ourselves, so once you overwrite a build, you can't access the previous version, with or without an ID, unless you version it yourself. And in that case you need a commit hash, not an external ID.
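To illustrate the point: since builds aren't versioned by MLEM, an opaque ID wouldn't recover an overwritten build anyway; if you need to tell build versions apart, derive an identifier yourself, from the content or from your VCS commit hash. The `build_id` helper below is hypothetical, not part of MLEM.

```python
import hashlib
from pathlib import Path

# Illustrative only: a content-derived identifier for a build artifact.
# A commit hash from your VCS serves the same purpose; an external ID
# handed out at build time would not, once the path is overwritten.
def build_id(path: str) -> str:
    """Short content hash of a build artifact, usable as a manual ID."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:12]
```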
Good point regarding local docker deploy. I think that should be a way to go. Re builds: got it 👍
@daavoo do you have something to add here?
Just that the following:
> Both serve and apply-remote exist just because they are needed for deployment anyway and we expose them to give our users more flexibility. I mean, if we do that for serve, we'll just duplicate deployments. Instead you just use local docker deployment.
Could that then be better reflected in the docs? The semantics in https://mlem.ai/doc/get-started/ seem a little ambiguous to me, now that I know the points mentioned above.
@daavoo, could you please give a couple of examples? Don't think I understood your point.
Do you want to add a note about "deploying to Docker locally" as an alternative to `mlem serve`? At the end of https://mlem.ai/doc/get-started/serving, for example.
If so, I think we could just showcase "local docker deploy" at the end of the Serve section. It should be pretty simple after we release the new mechanics for deploy, because it won't need any additional stuff like an env declaration or a deployment declaration.
@daavoo, could you please TAL at my question above?
Sorry, I missed the previous notification 😅.
> @daavoo, could you please give a couple of examples? Don't think I understood your point.
I'm just trying to say that if `serve` and `apply-remote` exist just because they are needed for deployment, it feels confusing to me, as a user following the get-started, to face one entire page about serving where the word "deploy" is not used a single time.
> If so, I think we could just showcase "local docker deploy" in the end of Serve section. It should be pretty simple after we release new mechanics for deploy, cause it won't need any additional stuff like env declaration or deployment declaration.
I guess what I struggle with is understanding the motivation for a separate Serving page. If local deployment is an extended, more flexible way of doing the same thing, why not add a Local Deployment example to the existing Deployment page and remove Serving?
I somewhat disagree with "serve and apply-remote exist just because they are needed for deployment". `serve` is functionality I would sometimes use even without deployment; it's a standalone thing, useful on its own, to my mind.
> If local deployment is an extended, more flexible way of doing the same thing
To my mind, it's something between serving and deployment; not sure where it would be better to attribute it. From the docs' point of view, I think it should be part of the Serving page, because serving happens on your machine, while deployment happens on some PaaS: Heroku, SageMaker, K8s, or provisioned EC2 machines, thanks to the future integration with TPI.
Does it become more clear now? :)
Yes, thanks @aguschin.
I still feel that the meaning of "serving" in MLEM is kind of different compared to other contexts, but I guess developers' naming opinions are just a never-ending story 😅
Ok, I've created a ticket in mlem.ai repo for this. Thank you for the idea!