Sherif Akoush

Results 65 comments of Sherif Akoush

This can also happen if we have 2 models (an inference model and an explainer model) deployed on one instance; if this instance dies and these 2 models get rescheduled, there is...

> This can also happen if we have 2 models (an inference model and an explainer model) deployed on one instance; if this instance dies and these 2 models get rescheduled, there...

> There is a bug where the expected number of model replicas does not update after a change in the number of server replicas. Will fix that part first before merging...

@Phyks Thanks for reporting this issue. I think it is better to fix the underlying issue with the import. We welcome contributions if you are happy to provide a...

@rachidrebik Thanks for your message. Streaming support in MLServer doesn't work with adaptive batching for now, hence this warning. Using `predict_stream` should therefore fall back to non-batching...
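To illustrate why streaming and adaptive batching don't mix, here is a minimal sketch of the two request paths. The `AdaptiveBatcher` class, its parameter names, and `predict_stream` below are hypothetical stand-ins, not MLServer's internal implementation: the point is that a batcher must hold requests until a batch fills or a timeout fires, while a streaming call has to hit the model immediately, one item at a time.

```python
import time
from typing import Callable, Iterator, List

class AdaptiveBatcher:
    """Hypothetical sketch of adaptive batching: queue single requests
    and flush them as one batch when the batch is full or stale."""

    def __init__(self, predict_fn: Callable[[List[int]], List[int]],
                 max_batch_size: int = 4, max_batch_time: float = 1.0):
        self.predict_fn = predict_fn
        self.max_batch_size = max_batch_size
        self.max_batch_time = max_batch_time
        self.pending: List[int] = []
        self.last_flush = time.monotonic()

    def submit(self, request: int) -> List[int]:
        # Queue the request; only flush once the batch is full
        # or the oldest pending request has waited long enough.
        self.pending.append(request)
        full = len(self.pending) >= self.max_batch_size
        stale = time.monotonic() - self.last_flush >= self.max_batch_time
        return self.flush() if full or stale else []

    def flush(self) -> List[int]:
        batch, self.pending = self.pending, []
        self.last_flush = time.monotonic()
        return self.predict_fn(batch)

def predict_stream(predict_fn: Callable[[List[int]], List[int]],
                   request: int) -> Iterator[int]:
    # A streaming path cannot wait for other requests to form a
    # batch, so it bypasses the batcher and calls the model directly.
    yield from predict_fn([request])

double = lambda xs: [x * 2 for x in xs]
batcher = AdaptiveBatcher(double, max_batch_size=2, max_batch_time=60.0)
batcher.submit(1)           # queued, batch not yet full -> []
print(batcher.submit(2))    # batch full, flushed -> [2, 4]
print(list(predict_stream(double, 3)))  # direct call -> [6]
```

This is why a non-batching fallback is the expected behaviour for `predict_stream`: the stream skips the queueing step entirely.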

@rachidrebik if you provide a set of instructions to reproduce the issue you are raising, we can take a further look.

@rachidrebik Thanks, yes, we can replicate the issue and we will look into it in the next few weeks.

Until we look into it, maybe try older versions of `mlserver`, as this might be a bug that was introduced recently.

@rachidrebik having looked more into this issue, we think it is likely that your model is not batch enabled. Consider a model that takes `[b, n]` elements as...
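A minimal sketch of the distinction above, using NumPy (the function names here are illustrative, not MLServer APIs): a batch-enabled model accepts a `[b, n]` array and vectorises over the batch axis, whereas a single-instance model only handles one `[n]` vector and would behave incorrectly if handed a stacked batch.

```python
import numpy as np

def batch_enabled_predict(x: np.ndarray) -> np.ndarray:
    # Batch-enabled: expects shape [b, n], one row per request.
    # Reductions run along axis 1, so any batch size b works.
    assert x.ndim == 2, "expected a [b, n] batch"
    return x.sum(axis=1, keepdims=True)          # shape [b, 1]

def single_instance_predict(x: np.ndarray) -> np.ndarray:
    # Not batch-enabled: expects exactly one instance of shape [n].
    # Feeding it a [b, n] batch would sum across requests and
    # silently return a single wrong value.
    assert x.ndim == 1, "expected a single [n] instance"
    return np.array([x.sum()])                   # shape [1]

batch = np.array([[1, 2, 3], [4, 5, 6]])         # b=2, n=3
print(batch_enabled_predict(batch))              # [[6], [15]]
print(single_instance_predict(batch[0]))         # [6]
```

Adaptive batching assumes the first shape contract: it stacks incoming requests along a leading batch axis before calling the model, so a model written against the second contract breaks under it.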