MLServer
Python client library for mlserver
Hello everyone,
I was exploring using mlserver to deploy ML models as a REST service and noticed an issue: if you want to use mlserver's codecs (such as the numpy or pandas codecs) from Python client code, you must add mlserver itself as a dependency. That pulls in many transitive dependencies (fastapi, aiokafka, uvicorn, etc.) and significantly bloats the dependency footprint. Would it not be more practical to have a separate mlserver-client package that contains only the codecs and types?
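For context, this is what the codec convenience looks like today. A minimal sketch, assuming mlserver's documented `NumpyRequestCodec` API; note that just encoding a request this way drags in the whole server dependency tree:

```python
import numpy as np
from mlserver.codecs import NumpyRequestCodec

# Encode a NumPy array into an Open Inference Protocol v2 request
# using mlserver's codec. Convenient, but importing mlserver pulls in
# fastapi, uvicorn, aiokafka, etc.
payload = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
request = NumpyRequestCodec.encode_request(payload)

# InferenceRequest is a pydantic model, so it serializes straight to
# V2 JSON (.json() is the pydantic v1 API; on v2, model_dump_json()).
print(request.json())
```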
Or how do you currently integrate mlserver with another microservice? Do you manually create the Open Inference Protocol v2 JSON?
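For comparison, the dependency-free alternative is to build the V2 JSON by hand and POST it. A rough sketch following the Open Inference Protocol v2 spec; the URL, port, and model name are placeholders:

```python
import requests

# Hand-rolled V2 inference request: plain JSON, no mlserver dependency.
v2_request = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [2, 2],
            "datatype": "FP32",
            "data": [1.0, 2.0, 3.0, 4.0],
        }
    ]
}

resp = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",  # placeholder endpoint
    json=v2_request,
)
resp.raise_for_status()
print(resp.json()["outputs"])
```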
Has this ever been explored since? We currently maintain an internal package that contains just the few key codecs, settings, and errors we use (a sketch of the idea is below), but an official modularization would be much nicer.
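To illustrate, the kind of thing our internal package holds is roughly this. A simplified, hypothetical sketch; the function name and dtype table are our own, not mlserver's:

```python
import numpy as np

# Minimal NumPy -> V2 dtype mapping (extend as needed).
_NUMPY_TO_V2 = {
    np.dtype("float32"): "FP32",
    np.dtype("float64"): "FP64",
    np.dtype("int32"): "INT32",
    np.dtype("int64"): "INT64",
    np.dtype("bool"): "BOOL",
}

def encode_numpy_input(name: str, array: np.ndarray) -> dict:
    """Encode a NumPy array as an Open Inference Protocol v2 input dict."""
    datatype = _NUMPY_TO_V2.get(array.dtype)
    if datatype is None:
        raise ValueError(f"Unsupported dtype: {array.dtype}")
    return {
        "name": name,
        "shape": list(array.shape),
        "datatype": datatype,
        "data": array.flatten().tolist(),
    }
```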