
Sending your input data instances as a JSON object to a deployed model

Open · jrash33 opened this issue 3 years ago · 3 comments

Hey @bendangnuksung! Wow, this repo seriously saved my life, thank you so much. Using your repo, I have successfully deployed a Mask R-CNN model to GCP AI Platform with no issues. But for a couple of weeks now I have been hitting a roadblock on getting a prediction back. In other words, what's an example JSON object I can send that will work? Here is the code I used to create the serving model:

def make_serving_ready(model_path, save_serve_path, version_number):
    import os
    import tensorflow as tf
    from tensorflow.python.saved_model import signature_constants, tag_constants

    # The SavedModel is written to <save_serve_path>/<version_number>/
    export_dir = os.path.join(save_serve_path, str(version_number))
    graph_pb = model_path

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

    # Load the frozen inference graph.
    with tf.gfile.GFile(graph_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    sigs = {}
    
    # tf.import_graph_def(graph_model_def, name='', input_map={"input_image": img_uint8})
    with tf.Session(graph=tf.Graph()) as sess:
        # name="" is important to ensure we don't get spurious prefixing
        tf.import_graph_def(graph_def, name="")
        g = tf.get_default_graph()
        input_image = g.get_tensor_by_name("input_image:0")
        input_image_meta = g.get_tensor_by_name("input_image_meta:0")
        input_anchors = g.get_tensor_by_name("input_anchors:0")

        output_detection = g.get_tensor_by_name("mrcnn_detection/Reshape_1:0")
        output_mask = g.get_tensor_by_name("mrcnn_mask/Reshape_1:0")

        # NOTE: only input_image is exposed in this signature; the commented
        # lines are earlier attempts at also exposing input_image_meta and
        # input_anchors.
        sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
            tf.saved_model.signature_def_utils.predict_signature_def(
                {"input_image": input_image},
                # {"input_image": input_image, "input_image_meta": input_image_meta, "input_anchors": input_anchors},
                # {"image_bytes": img_uint8, "input_image_meta": input_image_meta, "input_anchors": input_anchors},
                {"mrcnn_detection/Reshape_1": output_detection,
                 "mrcnn_mask/Reshape_1": output_mask})

        builder.add_meta_graph_and_variables(sess,
                                             [tag_constants.SERVING],
                                             signature_def_map=sigs)

    builder.save()
    print("*" * 80)
    print("FINISH CONVERTING FROZEN PB TO SERVING READY")
    print("PATH:", PATH_TO_SAVE_TENSORFLOW_SERVING_MODEL)
    print("*" * 80)

For example, I tried the JSON input below just to get any type of response back, with no luck:

{"instances":[
{"input_image":[[[[0.0],[0.5],[0.8]]]]},
{"input_image_meta":[[[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]]]
}
]}
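
Or should each element of "instances" instead carry every signature input as a sibling key, something like this (dummy values and shapes, just to show the structure)?

{"instances": [
    {
        "input_image": [[[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]]],
        "input_image_meta": [1.0, 1024.0, 1024.0, 3.0],
        "input_anchors": [[0.1, 0.1, 0.2, 0.2]]
    }
]}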

Please help!!

P.S. Going the extra mile: how would we adjust the above function to accept b64-encoded images? :)

jrash33 · Aug 05 '20

Hi @jrash33, glad to hear this repo helped you. Coming to your question: there is already inference REST API code in inferencing/saved_model_inference.py; inside, there is a method called detect_mask_single_image_using_restapi(), which sends a request and gets back a JSON response. You can make changes according to your needs; a rough sketch of that kind of call is below.
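
Assumptions in the sketch: TF Serving running on localhost:8501 with model name "mask_rcnn" (GCP AI Platform expects the same {"instances": ...} body at its own endpoint), and dummy arrays that only show the default-config shapes. Real values must come from Mask R-CNN's own preprocessing (mold_inputs / get_anchors in matterport's Mask_RCNN), and the SavedModel signature must expose all three inputs.

import json

import numpy as np
import requests

# Dummy inputs with the default 1024x1024 config's shapes; replace with the
# real output of mold_inputs / get_anchors.
molded_image = np.zeros((1024, 1024, 3), dtype=np.float32)
image_meta = np.zeros((93,), dtype=np.float32)      # 93 = 12 fixed fields + 81 COCO classes
anchors = np.zeros((261888, 4), dtype=np.float32)   # anchor count for the default config

payload = {"instances": [{
    "input_image": molded_image.tolist(),
    "input_image_meta": image_meta.tolist(),
    "input_anchors": anchors.tolist(),
}]}

resp = requests.post("http://localhost:8501/v1/models/mask_rcnn:predict",
                     data=json.dumps(payload))
resp.raise_for_status()
outputs = resp.json()["predictions"][0]
print(outputs.keys())  # mrcnn_detection/Reshape_1, mrcnn_mask/Reshape_1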

bendangnuksung · Aug 07 '20

Hey @bendangnuksung, wow, thank you so much for the reply, much appreciated. Do you have any version that can accept b64-encoded images as a single input? I'm trying to pass images through your method to GCP AI Platform, and the images seem to be too large. Thanks again!

jrash33 · Aug 07 '20

Sorry, I do not have any other version that sends requests using a b64-encoded image. The REST API serving model seems to take only a list as input. I do not have a solution as of now, but I can suggest two workarounds that might help:

  1. (Easy) Put a Flask server between your client and the serving model as a middleman. Make gRPC requests from Flask to the serving model, since gRPC is much faster (see the sketch after this list).
  2. (Hard) Before converting the h5 model, create and attach a head that accepts a base64-encoded image as part of your model, then convert it to a serving model. You can look at the AOCR repo, which accepts base64 images and does all the conversion internally inside the model.
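
A minimal sketch of workaround 1, assuming TF Serving's gRPC endpoint on localhost:8500 and the model/input names from the signature above (both placeholders); preprocess() is a hypothetical stand-in for Mask R-CNN's molding and anchor generation (mold_inputs / get_anchors in matterport's Mask_RCNN):

import base64
import io

import grpc
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

app = Flask(__name__)
# Raise gRPC's 4 MB default message cap; Mask R-CNN payloads are big.
channel = grpc.insecure_channel(
    "localhost:8500",
    options=[("grpc.max_send_message_length", 100 * 1024 * 1024),
             ("grpc.max_receive_message_length", 100 * 1024 * 1024)])
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)


@app.route("/predict", methods=["POST"])
def predict():
    # Client posts {"b64": "<base64-encoded JPEG/PNG bytes>"}.
    img_bytes = base64.b64decode(request.json["b64"])
    image = np.asarray(Image.open(io.BytesIO(img_bytes)))

    # Hypothetical helper: produce the molded image, image meta and anchors
    # exactly as MaskRCNN.detect() would before running the graph.
    molded_image, image_meta, anchors = preprocess(image)

    req = predict_pb2.PredictRequest()
    req.model_spec.name = "mask_rcnn"
    req.model_spec.signature_name = "serving_default"
    req.inputs["input_image"].CopyFrom(
        tf.make_tensor_proto(molded_image[np.newaxis], dtype=tf.float32))
    req.inputs["input_image_meta"].CopyFrom(
        tf.make_tensor_proto(image_meta[np.newaxis], dtype=tf.float32))
    req.inputs["input_anchors"].CopyFrom(
        tf.make_tensor_proto(anchors[np.newaxis], dtype=tf.float32))

    result = stub.Predict(req, 30.0)  # 30-second timeout
    detections = tf.make_ndarray(result.outputs["mrcnn_detection/Reshape_1"])
    masks = tf.make_ndarray(result.outputs["mrcnn_mask/Reshape_1"])
    return jsonify({"detections": detections.tolist(),
                    "mask_shape": list(masks.shape)})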

Unfortunately, I do not have time to do all this. Do let me know if you find any other way.

bendangnuksung · Aug 07 '20