
[Feature request] Test utilities for the HTTP server

Open gariepyalex opened this issue 3 years ago • 1 comment

Description

In the internal tests of MLServer, there is a fixture that creates a fastapi.testclient.TestClient instance for an MLServer with all models loaded.

It would be quite practical to provide similar testing utilities that create a TestClient out of the box. This would allow users to easily test REST endpoints, which is crucial, especially for Custom Inference Runtimes.

Current workaround

Right now, it is quite difficult to write end-to-end tests calling the REST endpoints. Here is my current implementation:

import asyncio

import pytest
from fastapi.testclient import TestClient
from mlserver import MLServer
from mlserver.cli.serve import load_settings
from mlserver.rest.server import RESTServer

@pytest.fixture(scope="module")
def rest_client() -> TestClient:
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    # MLSERVER_CONFIG_DIRECTORY is the folder containing settings.json
    # and the model-settings.json files
    settings, model_settings = loop.run_until_complete(
        load_settings(MLSERVER_CONFIG_DIRECTORY)
    )
    server = MLServer(settings)

    # Build the REST server by hand, reaching into MLServer internals
    server._rest_server = RESTServer(
        settings=server._settings,
        data_plane=server._data_plane,
        model_repository_handlers=server._model_repository_handlers,
    )
    # Load every model before handing out the test client
    loop.run_until_complete(
        asyncio.gather(*[server._model_registry.load(model) for model in model_settings])
    )
    loop.close()

    return TestClient(server._rest_server._app)

There are many issues with the above code snippet:

  • It relies heavily on the internals of mlserver.server.
  • The function load_settings is handy, but it is internal to the cli namespace.
  • Since many internal functions are async, this adds additional complexity.

There may be easier ways to create the TestClient, but this is the best solution I could come up with. The alternative would be to launch mlserver in a separate process and poll the health endpoint until the server is ready.
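The polling half of that alternative can be sketched with the standard library alone. The helper below is a generic readiness poller, not part of MLServer; the URL, timeout, and interval values are assumptions for illustration.

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, ConnectionError, OSError):
            pass  # server not accepting connections yet; retry shortly
        time.sleep(interval)
    return False
```

In the subprocess approach, a fixture would spawn `mlserver start`, call something like `wait_until_ready("http://localhost:8080/v2/health/ready")` before yielding, and terminate the process on teardown.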

gariepyalex avatar Mar 25 '22 21:03 gariepyalex

Thanks for raising this one @gariepyalex. This is a great point.

Given that MLServer provides a sort of "framework" to write custom runtimes, it would make a lot of sense to also provide testing utilities for these custom runtimes.

adriangonz avatar Mar 29 '22 09:03 adriangonz