MLServer

An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more

304 MLServer issues

There are a few fields in the model metadata object for which it could make sense to let runtimes define defaults. For example, the `SKLearnRuntime` could set the `platform` value to...
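The idea above can be sketched as follows. This is an illustrative mock-up only, under the assumption that defaults are merged with user-supplied metadata; the class and attribute names are hypothetical, not MLServer's actual API:

```python
# Hypothetical sketch: runtime-level metadata defaults.
# `InferenceRuntime`, `metadata_defaults` and `SKLearnRuntime`'s shape here
# are illustrative assumptions, not MLServer's real classes.


class InferenceRuntime:
    # Per-runtime defaults; subclasses override this mapping.
    metadata_defaults: dict = {}

    def __init__(self, user_metadata: dict):
        # User-supplied values win; runtime defaults fill the gaps.
        self.metadata = {**self.metadata_defaults, **user_metadata}


class SKLearnRuntime(InferenceRuntime):
    # e.g. the SKLearn runtime could default the `platform` field.
    metadata_defaults = {"platform": "sklearn"}


runtime = SKLearnRuntime({"name": "my-model"})
print(runtime.metadata["platform"])  # -> sklearn
```

A user who explicitly sets `platform` in their metadata would still override the runtime's default, since the user dict is merged last.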

We currently allow users to define custom endpoints on their inference runtimes. However, these endpoints are only served through REST. This issue should explore mechanisms to also allow users to...

Following up from #167, there are a few things to take into account before adding support for custom endpoints across multiple models. At the moment we just load the route...

On top of #167, it would be great to extend the support for custom endpoints to gRPC calls as well. However, it's not clear at the moment whether this is...

It seems that [`buildpacks`](https://buildpacks.io/) offer an easy way to go from code to image. This could be leveraged by MLServer to ease the process of building custom inference runtimes.


Create multi-language wrappers that can run Java, C++ and R models. For this, we can build on the existing research in Seldon Core, which uses tools like JNI and PyBind to...


Create an "inference" runtime that lets you run Alibi Detectors and Explainers.


For more information, see the MLflow & MLServer design doc.
