Adrian Gonzalez-Martin

166 comments by Adrian Gonzalez-Martin

Hey @BFAnas, Similar to how you can ask MLServer to treat the whole request as a multi-column dataframe, you can also specify that some of your columns are of...
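
As a rough illustration of the idea (the input names and values below are made up, and this assumes the `pd` and `datetime` content types documented by MLServer), such a request could look something like this:

```python
# Sketch of a V2 inference request, expressed as a plain Python dict.
# The request-level "pd" content type asks MLServer to assemble the inputs
# into a multi-column DataFrame, while per-input content types describe
# how individual columns should be decoded.
inference_request = {
    "parameters": {"content_type": "pd"},
    "inputs": [
        {
            "name": "total_amount",        # plain numeric column
            "datatype": "FP64",
            "shape": [3],
            "data": [10.5, 23.1, 7.8],
        },
        {
            "name": "purchased_at",        # column decoded as datetimes
            "datatype": "BYTES",
            "shape": [3],
            "data": [
                "2024-01-01T00:00:00Z",
                "2024-01-02T00:00:00Z",
                "2024-01-03T00:00:00Z",
            ],
            "parameters": {"content_type": "datetime"},
        },
    ],
}
```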

Right, got it. I think this could be a useful feature. Would you mind changing the issue title to something along the lines of "Selecting column as index for pandas...

Sure thing @BFAnas, that title sounds great to me! MLServer speaks the [V2 Inference Protocol](https://kserve.github.io/website/0.7/modelserving/inference_api/), so I expect the solution will be along the lines of passing the index...
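
A purely hypothetical sketch of what that could look like on top of the V2 protocol, assuming a (not yet existing) `index` request parameter that names the column to use as the DataFrame index:

```python
# Hypothetical only: neither the "index" parameter nor this behaviour exists
# in MLServer today. This just illustrates the rough shape such a solution
# could take while staying within the V2 Inference Protocol.
inference_request = {
    "parameters": {"content_type": "pd", "index": "customer_id"},
    "inputs": [
        {"name": "customer_id", "datatype": "INT64", "shape": [2], "data": [101, 102]},
        {"name": "total_amount", "datatype": "FP64", "shape": [2], "data": [10.5, 23.1]},
    ],
}
```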

Hey @pablobgar, Most of these things apply to both gRPC and REST, right? As a user, I would expect to set an SSL cert once and then have that applied...

Leaving aside the usability aspects (i.e. making it easier for the user), my main concern is that giving full access to the underlying Uvicorn / FastAPI / gRPC...
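
For context, here is a rough sketch (not MLServer's actual code; paths and ports are illustrative) of how the underlying HTTP and gRPC servers each consume TLS material on their own, which is what makes a single top-level cert/key setting more attractive than exposing their raw configuration:

```python
# Rough sketch: Uvicorn and gRPC each need the TLS material wired in
# separately, and in different forms (file paths vs. raw bytes).
import grpc
import uvicorn

CERT_PATH = "/certs/server.crt"   # illustrative paths
KEY_PATH = "/certs/server.key"

def build_http_config(app) -> uvicorn.Config:
    # Uvicorn takes the cert and key as file paths.
    return uvicorn.Config(
        app,
        host="0.0.0.0",
        port=8080,
        ssl_certfile=CERT_PATH,
        ssl_keyfile=KEY_PATH,
    )

def add_grpc_port(server: grpc.Server) -> None:
    # gRPC takes the same material as raw bytes instead.
    with open(KEY_PATH, "rb") as key, open(CERT_PATH, "rb") as cert:
        creds = grpc.ssl_server_credentials([(key.read(), cert.read())])
    server.add_secure_port("0.0.0.0:8081", creds)
```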

Hey @PaulEdwardBrennan, This would involve adding a lock file which specifies the exact deps and subdeps that are shipped within each MLServer image. This ensures reproducibility, as well as...

Good point! Let me pull that into `v1.1.x`.

This could be a good chance to explore OpenTelemetry for tracing (and metrics?)
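
A minimal sketch of what OpenTelemetry tracing around an inference call could look like; the span and attribute names are made up for illustration:

```python
# Minimal OpenTelemetry tracing sketch: export spans to the console and wrap
# an inference call in a span so we can see where time is spent.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("mlserver.sketch")

def predict(payload):
    with tracer.start_as_current_span("model.predict") as span:
        span.set_attribute("model.name", "my-model")
        return payload  # placeholder for the real inference logic

predict({"inputs": [1, 2, 3]})
```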

To add on top of what @agrski mentioned, we're currently exploring other architectures to allow inference to run in parallel across multiple models (https://github.com/SeldonIO/MLServer/issues/434). We `spawn` each...
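
A heavily simplified sketch of that idea (not MLServer's actual worker pool): each model runs in its own process created with the `spawn` start method, so inference can proceed in parallel across models:

```python
# Simplified sketch: one worker process per model, started via "spawn",
# fed requests through a queue.
import multiprocessing as mp

def serve_model(model_name: str, requests) -> None:
    # In the real server this would load the model and answer requests;
    # here we just drain the queue until we see the stop sentinel.
    while (payload := requests.get()) is not None:
        print(f"[{model_name}] scored {payload}")

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    worker = ctx.Process(target=serve_model, args=("my-model", queue))
    worker.start()

    queue.put({"inputs": [1, 2, 3]})
    queue.put(None)   # sentinel to stop the worker
    worker.join()
```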

Hey @MarcinSkrobczynski, We've recently released MLServer `1.1.0`, which includes a number of improvements to parallel inference, particularly around memory usage. It would be great if you could give that a...