
Where is InferenceRequest initialized?

lanzhiwang opened this issue on Mar 27, 2024 · 0 comments

I followed this documentation: https://mlserver.readthedocs.io/en/stable/examples/sklearn/README.html. After running MLServer, I get the correct results when I follow those steps.

I took a look at the MLServer source code and found where the relevant routes are registered:

         APIRoute(
             "/v2/models/{model_name}/versions/{model_version}/infer",
             endpoints.infer,
             methods=["POST"],
         ),
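For context, this is roughly the request I send against that route after serving the model from the sklearn example. The model name, version, and tensor shape below come from my local setup, so treat them as placeholders:

    import requests

    # Open Inference Protocol (V2) request body. The input name, shape,
    # and datatype follow the sklearn MNIST example; adjust them to
    # whatever your model expects.
    inference_request = {
        "inputs": [
            {
                "name": "predict",
                "shape": [1, 64],
                "datatype": "FP32",
                "data": [[0.0] * 64],
            }
        ]
    }

    endpoint = (
        "http://localhost:8080"
        "/v2/models/mnist-svm/versions/v0.1.0/infer"
    )
    response = requests.post(endpoint, json=inference_request)
    print(response.json())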

After the client sends the HTTP request, the endpoints.infer method is called, and inside it the model is invoked to produce the final result.

My confusion is the following:

    async def infer(
        self,
        raw_request: Request,
        raw_response: Response,
        payload: InferenceRequest,
        model_name: str,
        model_version: Optional[str] = None,
    ) -> InferenceResponse:

The endpoints.infer method has a payload parameter of type InferenceRequest, which represents the user's input.
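For illustration, here is a minimal standalone FastAPI sketch (my own toy example, not MLServer code; EchoRequest and /echo are made-up names) of how a JSON body is normally turned into a Pydantic model through the parameter's type annotation. I assume the InferenceRequest payload is built by the same mechanism, but I could not confirm where:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Stand-in for InferenceRequest: declaring a Pydantic model as the
    # parameter type is what tells FastAPI to parse and validate the
    # JSON body into an instance of that model.
    class EchoRequest(BaseModel):
        name: str
        data: list[float]

    @app.post("/echo")
    async def echo(payload: EchoRequest) -> dict:
        # By the time this handler runs, FastAPI has already built the
        # EchoRequest instance from the raw HTTP body.
        return {"name": payload.name, "n_values": len(payload.data)}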

What I want to ask is: where in the code is the HTTP body instantiated into an InferenceRequest object? I can't seem to find the relevant part in the code. Where is this logic implemented?

lanzhiwang · Mar 27 '24 10:03