Requesting that the model registry Python client expose create() calls for register_model, model_version, model_artifacts, etc.
**Is your feature request related to a problem? Please describe.**
As a QE, I need to be able to test both positive and negative scenarios around model registry workflows. The current methods use upsert* calls, which prevents me from using the Python client for my test code.

**Describe the solution you'd like**
Please expose the POST and PATCH calls for the various model registry resources, so that one can write automation around both the positive and negative paths.

**Additional context**
Logging this based on my conversation with @syntaxsdev
Is OpenAPI generated code for Python sufficient to use here for API level calls?
@rareddy yes, we discussed we might just expose for direct calls
Thanks @syntaxsdev. What I was wondering is: isn't this enough? https://github.com/kubeflow/model-registry/blob/main/clients/python/src/mr_openapi/api/model_registry_service_api.py
@rareddy it is. That's what Debarati is asking for: that it be exposed in the client lib for testing.
Thanks for the discussion on this. I'd like to emphasize the importance of ensuring that our public-facing APIs are designed and maintained for their intended users, and that our testing methodologies match those APIs, rather than "bending" the APIs to fit testing needs.
Modifying an external contract for internal testing purposes can sometimes mask problems or create a divergence between how we test and how the API is actually used. Ideally, our testing should reflect real-world usage patterns and validate the stability and correctness of the established public interface. If we find a particular public API difficult to test, it may indicate a need to refine our testing strategies or tools for that API, rather than changing the API itself -- my 2 cents
> our testing should reflect real-world usage patterns and validate the stability and correctness of the established public interface
Fully agree, that is why I'm insisting that the MR py client's existing user-facing methods remain covered. We do so here, and we do so d/s. I think we need confirmation from @dbasunag and others that we're not going to remove the MR py client user-method testing.
> If we find a particular public API difficult to test, it may indicate a need to refine our testing strategies or tools for that API, rather than changing the API itself -- my 2 cents
Again, fully agree. My point is that rather than re-doing the REST calls "manually" in pytest (using `requests`), we should reuse the openapi-codegen generated REST client, which is wrapped by the MR py client.

This way, instead of diverging (the MR py client using the openapi-codegen client, the d/s tests using hand-crafted `requests`-based calls), both use the same client, and we increase coverage of the code leveraged by the MR py client (as a "side effect").

Hope that clarifies?
Ah, thank you for the detailed clarification! I apologize for the misunderstanding on my part.
I now see your point. I had mistakenly interpreted the suggestion as potentially surfacing new, raw methods from the openapi-codegen output directly into the public API of the MR py client.

Your actual proposal, to leverage the existing openapi-codegen generated REST client (which the MR py client already internally wraps) for writing tests, rather than making direct `requests` calls in pytest, makes perfect sense.
> I think we need confirmation from @dbasunag and others we're not going to remove the MR py client user method testing.
Yes, from my talk with @dbasunag, we are not going to change the existing public-facing tests or functionality.

I believe the intended use case is just to expose the generated REST client in a way that allows for additional testing directly against the MR.
I am not requesting changes to existing calls in the Python client. Instead, I am requesting that calls be exposed that would help the QE use case (all creates, etc.), so that we don't need to use raw REST calls to test model registry in d/s testing.
We might not need to expose the client; we might be able to just do this:

```python
mr = ModelRegistry(...)
raw_client_fnc = mr._api.get_client
async with raw_client_fnc() as raw_client:
    await raw_client.create_registered_model(...)  # <-- this is now the raw generated client
```
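To illustrate the access pattern end to end, here is a minimal, self-contained sketch. The stub classes below are hypothetical stand-ins for the real `ModelRegistry` and generated client (so the snippet runs without a live registry); only the `mr._api.get_client` access pattern and the `async with` usage mirror the suggestion above.

```python
import asyncio

class _RawClient:
    """Stand-in for the openapi-codegen generated REST client."""
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def create_registered_model(self, body):
        # The real generated client would issue the POST here.
        return {"name": body["name"], "id": "1"}

class _Api:
    """Stand-in for the internal API wrapper exposing get_client."""
    def get_client(self):
        return _RawClient()

class ModelRegistry:
    """Stand-in for model_registry.ModelRegistry (hypothetical stub)."""
    def __init__(self):
        self._api = _Api()

async def create_via_raw_client(mr, name):
    # The pattern from the comment above: reach the generated REST
    # client through the high-level client's internal _api attribute.
    raw_client_fnc = mr._api.get_client
    async with raw_client_fnc() as raw_client:
        return await raw_client.create_registered_model({"name": name})

result = asyncio.run(create_via_raw_client(ModelRegistry(), "my-model"))
print(result)  # → {'name': 'my-model', 'id': '1'}
```

In real test code, only `create_via_raw_client` would be needed; the rest is scaffolding standing in for the actual library.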
A quick test confirms this works as expected.
I believe we can close this out :) @tarilabs @dbasunag