mlem
🐶 A tool to package, serve, and deploy any ML model on any platform. Archived to be resurrected one day🤞
For computer vision models, it would be very handy to support image paths as input:

```
mlem apply visionmodel image.png -i --it image[tensorflow] -o prediction
```

Both TensorFlow and PyTorch...
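Under the hood, such an input type would only need to decode the file into an array before it reaches the model. A minimal sketch of that step, assuming Pillow is available; `read_image_as_tensor` is a hypothetical name, not a real MLEM API:

```python
# Hypothetical sketch of what an `image[...]` input type could do internally;
# none of these names come from MLEM itself.
import numpy as np
from PIL import Image

def read_image_as_tensor(path: str) -> np.ndarray:
    """Decode an image file into an HWC float array scaled to [0, 1]."""
    img = Image.open(path).convert("RGB")
    return np.asarray(img, dtype=np.float32) / 255.0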
Currently, we need to manually approve each commit in a PR from a fork before it runs in CI. This delays valuable feedback for external contributors. Suggestion: split...
### Context

A bug was introduced in https://github.com/iterative/mlem/pull/221, where the approach to combining prediction results incorrectly assumed the type to be a numpy array. This PR uses `DatasetAnalyzer` to first determine the type...
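The shape of the fix, sketched without the actual `DatasetAnalyzer` call (whose signature isn't shown here): detect the batch type first, then combine accordingly. All names below are illustrative assumptions, not the PR's code:

```python
# Illustrative only: dispatch on the detected type instead of assuming
# numpy arrays, which is the idea behind the DatasetAnalyzer-based fix.
import numpy as np

def combine_predictions(batches: list):
    first = batches[0]
    if isinstance(first, np.ndarray):
        return np.concatenate(batches)
    if isinstance(first, list):
        return [row for batch in batches for row in batch]
    raise TypeError(f"Cannot combine predictions of type {type(first)}")
```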
It would be a nice feature to allow serving MLEM models through https://developer.nvidia.com/nvidia-triton-inference-server. AFAIK, Triton supports all the frameworks currently supported by MLEM.
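For a sense of what this would enable, here is a client-side sketch against a Triton instance, using the real `tritonclient` package; the model name, tensor names, and shape are assumptions, not anything MLEM produces today:

```python
import numpy as np
import tritonclient.http as httpclient

# Assumed: a model named "visionmodel" with input "input" and output "output"
# deployed to a local Triton server; these names are illustrative only.
client = httpclient.InferenceServerClient(url="localhost:8000")
infer_input = httpclient.InferInput("input", [1, 224, 224, 3], "FP32")
infer_input.set_data_from_numpy(np.zeros((1, 224, 224, 3), dtype=np.float32))
result = client.infer(model_name="visionmodel", inputs=[infer_input])
print(result.as_numpy("output"))
```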
It would be a nice feature to allow serving MLEM models through https://pytorch.org/serve/. This is only suitable for PyTorch models, so I am not sure if it's out...
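Serving through TorchServe would mostly mean generating a handler for the packaged model. A rough sketch using TorchServe's real `BaseHandler` base class; the pre/post-processing below is purely illustrative, not an actual MLEM integration:

```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class MlemHandler(BaseHandler):
    """Illustrative handler a MLEM-to-TorchServe bridge might generate."""

    def preprocess(self, data):
        # TorchServe passes a list of request dicts; assume a JSON list payload
        payload = data[0].get("body") or data[0].get("data")
        return torch.tensor(payload)

    def postprocess(self, inference_output):
        # TorchServe expects a list with one entry per request
        return inference_output.tolist()
```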
- [x] If a subcommand was invoked, the action should include the parent command (e.g. `deploy run`, not just `run`; see the sketch after this list)
- [x] Add telemetry to API
- [ ] Report extensions used
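A sketch of the first item, assuming a Click-based CLI (which MLEM's Typer CLI builds on); `ctx.command_path` is real Click, everything else here is illustrative:

```python
import click

def telemetry_action(ctx: click.Context) -> str:
    # command_path is e.g. "mlem deploy run"; drop the program name so the
    # reported action is "deploy run", not just "run"
    return ctx.command_path.split(" ", 1)[1]
```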
MLEM could be a powerful tool if you need to distribute your model through different channels and use it in different circumstances (or easily switch between them). This could be...
As @AlexIoannides suggested, there should be a way to install requirements from model metadata. Something like

```
$ mlem pack my-model requirements
catboost==1.0.0
numpy==2.0.0
$ mlem pack my-model requirements -c...
```
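Installing them could then be a thin wrapper over pip. A hypothetical sketch; the requirements list mirrors the example above, and nothing here is a real MLEM API:

```python
import subprocess
import sys

def install_model_requirements(reqs: list) -> None:
    """Install pinned requirements extracted from model metadata."""
    # e.g. reqs = ["catboost==1.0.0", "numpy==2.0.0"], as in the example above
    subprocess.check_call([sys.executable, "-m", "pip", "install", *reqs])
```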