
[feasibility research] Investigate if we can get signature without the model server

Open gaocegege opened this issue 4 years ago • 10 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

When we run the model conversion jobs, we have to set up a real model inference server first, which may not be necessary. We should investigate whether we can get the signature directly, similar to saved_model_cli or some other tools.
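For the TensorFlow SavedModel case this is already possible offline, either with the saved_model_cli CLI or the Python API. A minimal sketch (the ./model path and the serving_default signature name are illustrative, assuming a TF2 SavedModel):

import tensorflow as tf

# Load the SavedModel from disk; no serving process is involved.
loaded = tf.saved_model.load("./model")
sig = loaded.signatures["serving_default"]
# Structured specs describe input/output names, shapes, and dtypes.
print(sig.structured_input_signature)
print(sig.structured_outputs)

The CLI equivalent would be something like: saved_model_cli show --dir ./model --tag_set serve --signature_def serving_default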

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

gaocegege avatar Jun 14 '20 11:06 gaocegege

/cc @simon-cj

gaocegege avatar Jun 14 '20 13:06 gaocegege

Emmm, except for TensorRT and PMML, the others are verified: their signatures can be extracted directly. PMML should be OK in theory. For TensorRT, I need to analyze further.
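As an example of direct extraction, the inputs and outputs of an ONNX model can be read straight from the file with the onnx package (a minimal sketch; the file name is illustrative):

import onnx

model = onnx.load("model.onnx")
# Graph inputs and outputs carry the name, shape, and element type.
for tensor in list(model.graph.input) + list(model.graph.output):
    shape = [d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print(tensor.name, shape, tensor.type.tensor_type.elem_type)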

simon-cj avatar Jun 15 '20 01:06 simon-cj

OK, we can sync the progress here.

gaocegege avatar Jun 15 '20 01:06 gaocegege

Do we need to get the signature for TRT plan? I think it is only used for UI. If we cannot do it without running the model server, can we claim that we do not support TRT plan signature extraction?

Same question for PMML.

gaocegege avatar Jun 15 '20 09:06 gaocegege

> Do we need to get the signature for TRT plan? I think it is only used for UI. If we cannot do it without running the model server, can we claim that we do not support TRT plan signature extraction?
>
> Same question for PMML.

PMML needs the signature to extract params, e.g. model inputs and outputs. For TRT it is not clear yet; it needs to be discussed for clever 1.7.0, where it is planned.
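Since PMML is plain XML, the field names and types could be read without any server by parsing the DataDictionary. A minimal sketch (the model.pmml path and the PMML 4.4 namespace are illustrative assumptions):

import xml.etree.ElementTree as ET

ns = {"p": "http://www.dmg.org/PMML-4_4"}
tree = ET.parse("model.pmml")
# DataField entries declare field names, data types, and optypes.
for field in tree.findall(".//p:DataDictionary/p:DataField", ns):
    print(field.get("name"), field.get("dataType"), field.get("optype"))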

simon-cj avatar Jun 16 '20 01:06 simon-cj

SGTM

gaocegege avatar Jun 16 '20 01:06 gaocegege

@simon-cj Is there any progress? I did not see the logic for extracting signatures from the TRT plan. Can we claim that we do not need to run a model server to extract signatures?

gaocegege avatar Jul 16 '20 04:07 gaocegege

Does "model inference server" mean TRTIS (Triton)? TRT plans have some constraints:

> Note: The generated plan files are not portable across platforms or TensorRT versions. Plans are specific to the exact GPU model they were built on (in addition to the platforms and the TensorRT version) and must be re-targeted to the specific GPU in case you want to run them on a different GPU.

https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html

So if we want to extract its signatures, we need that specific environment.

import tensorrt as trt

# 'engine' is a deserialized ICudaEngine; iterating over it yields binding names.
for binding in engine:
    print('Is INPUT:', engine.binding_is_input(binding),
          'DIMS:', binding, engine.get_binding_shape(binding),
          'DTYPE:', trt.nptype(engine.get_binding_dtype(binding)))

Output:
Is INPUT: True DIMS: data (3, 224, 224) DTYPE: <class 'numpy.float32'>
Is INPUT: False DIMS: mobilenetv20_output_flatten0_reshape0 (1000,) DTYPE: <class 'numpy.float32'>
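For reference, obtaining the engine used in the loop above might look like the following sketch (assuming a plan file named model.plan built on the same GPU, platform, and TensorRT version as the machine running this):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.plan", "rb") as f, trt.Runtime(logger) as runtime:
    # Deserialization fails unless the environment matches the one
    # the plan was built in, which is exactly the constraint above.
    engine = runtime.deserialize_cuda_engine(f.read())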

judgeeeeee avatar Jul 28 '20 07:07 judgeeeeee

/assign @simon-cj

Is there any update?

gaocegege avatar Aug 05 '20 02:08 gaocegege

/assign @judgeeeeee You implement the extraction scripts; after that, I will integrate them with klever-model-registry.

simon-cj avatar Aug 06 '20 06:08 simon-cj