Understanding tflite model inference / where is forward / inference / invoke function?

mgarbade opened this issue 3 years ago • 8 comments

This line of code seems to provide the inference result after a forward pass of the neural network:

const float* raw_landmarks = raw_tensor->data.f;

However, this is not a function call, but an object property.

My question: Where exactly is the forward pass of the neural network happening?

Also:
Shouldn't there be some kind of load("path/to/my_tflite_model.tflite") call inside absl::Status TfLiteTensorsToLandmarksCalculator::Open(CalculatorContext* cc)?
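For comparison, this is roughly what the forward pass looks like with the plain TFLite C++ API, outside MediaPipe (a minimal sketch; the path and shapes are placeholders). If I understand correctly, Invoke() is the actual forward pass, and data.f is just a pointer into the already-computed output buffer:

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model from disk -- the "load" step I was asking about.
  auto model =
      tflite::FlatBufferModel::BuildFromFile("path/to/my_tflite_model.tflite");

  // Build an interpreter and allocate the input/output buffers.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // Fill the input buffer.
  float* input = interpreter->typed_input_tensor<float>(0);
  input[0] = 0.0f;

  // The forward pass happens here.
  interpreter->Invoke();

  // Reading the output is just a pointer dereference, analogous to
  // raw_tensor->data.f in the calculator code above.
  const float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("%f\n", output[0]);
  return 0;
}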

mgarbade avatar May 31 '22 13:05 mgarbade

Hi @mgarbade. Could you please elaborate on this issue so we can investigate further?

sureshdagooglecom avatar Jun 02 '22 13:06 sureshdagooglecom

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] avatar Jun 09 '22 13:06 google-ml-butler[bot]

I wanted to understand tflite model inference, as I'm still struggling to make mediapipe use my own custom tflite model (for pose classification).
Regarding my questions: Seems like I was looking at the wrong file.

mgarbade avatar Jun 10 '22 09:06 mgarbade

Hi @mgarbade, we have TFLite code implementations; please check the links below for the code. https://source.corp.google.com/piper///depot/google3/third_party/mediapipe/util/tflite/operations/ https://source.corp.google.com/piper///depot/google3/third_party/mediapipe/util/tflite/cpu_op_resolver.cc

sureshdagooglecom avatar Jun 13 '22 07:06 sureshdagooglecom

Hmm, I cannot open those links; they ask me to "sign in" (I'm not at Google :stuck_out_tongue_winking_eye:).

mgarbade avatar Jun 14 '22 12:06 mgarbade

As I'm still struggling to get MediaPipe inference running, let me expand on my problem:

Mediapipe: Toy example using custom tflite graph

I'm trying to dig into mediapipe and adapt it to perform inference using a custom tflite model. However, this task seems to be harder than expected.

Can someone provide me with a simple toy example?

The task could be:

  • send a float array to the graph (say of length = 1 and all values = 0 for simplicity)
  • tflite model adds 1 to each element of the input tensor
  • the output tensor is sent back to the output stream and logged to the console

My tentative solution

Create tflite model

I'm using Python to create a simple tflite model that adds 1 to each element of the input tensor, like so:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# create model: output = input + 1, for inputs of shape (2, 3)
input1 = layers.Input(shape=(2, 3))
added = layers.Add()([input1, tf.ones_like(input1)])
model = keras.models.Model(inputs=input1, outputs=added)

# example inference (note the leading batch dimension)
model(np.array([[[1., 2., 3.], [4., 5., 6.]]]))
# output:
# array([[[2., 3., 4.],
#         [5., 6., 7.]]]

# convert to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# save tflite model
with open("adder_model_single_input_2x3.tflite", "wb") as file:
    file.write(tflite_model)
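To sanity-check the converted model before touching MediaPipe, I can run it with the plain TFLite C++ API (a sketch, assuming the converter kept the input shape as [1, 2, 3]; the file name matches the one saved above):

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile(
      "adder_model_single_input_2x3.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // Fill the 1x2x3 input with 0..5 and run the model.
  float* input = interpreter->typed_input_tensor<float>(0);
  for (int i = 0; i < 6; ++i) input[i] = static_cast<float>(i);
  interpreter->Invoke();

  // Expect 1..6: each element incremented by one.
  const float* output = interpreter->typed_output_tensor<float>(0);
  for (int i = 0; i < 6; ++i) std::printf("%f ", output[i]);
  std::printf("\n");
  return 0;
}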

Create a mediapipe solution for that tflite model

Based on the mediapipe "Hello World" example, I created a simple setup that is supposed to feed a vector of floats to the above-mentioned tflite model. Here is the graph protobuf:

input_stream: "INPUT:in"
output_stream: "MATRIX:matrix"

node {
  calculator: "ActionCalculator"
  input_stream: "INPUT:in"
  output_stream: "VECTOR_FLOAT:vector_float"
}

node {
  calculator: "VectorToTensorCalculator"
  input_stream: "VECTOR_FLOAT:vector_float"
  output_stream: "MATRIX:matrix"
}

node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "MATRIX:matrix"
  output_stream: "TENSORS:tensors"
}

node {
  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:tensors"
  output_stream: "TENSORS:tflite_prediction"
  node_options: {
    [type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
      model_path: "mediapipe/models/adder_model_single_input_2x3.tflite"
      delegate { xnnpack {} }
    }
  }
}

I know that the output_stream of the graph is not the output node of the tflite model but an intermediate node ("MATRIX:matrix"); that should not be a problem, I guess.

The only "self-written" calculator in the above code is the VectorToTensorCalculator, which does nothing but convert an input float vector into a mediapipe::Matrix (inspired by this repo), like so:

absl::Status VectorToTensorCalculator::Process(CalculatorContext* cc) {
  LOG(INFO) << "VectorToTensorCalculator::Process";
  std::vector<float> inputVectorFloat =
      cc->Inputs().Tag(VectorFloat).Get<std::vector<float>>();

  // Declare matrix
  Matrix matrix;

  const int nrows = 2;
  const int ncols = 3;
  matrix.resize(nrows, ncols);

  // Fill matrix with the values of the input vector.
  for (int i = 0; i < nrows; i++) {
    for (int j = 0; j < ncols; j++) {
      int index = i * ncols + j;
      matrix(i, j) = inputVectorFloat.at(index);
    }
  }

  auto output_stream_collection = std::make_unique<Matrix>(matrix);
  cc->Outputs().Tag(OutputMatrix).Add(output_stream_collection.release(),
                                      cc->InputTimestamp());
  return absl::OkStatus();
}
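For completeness, here are the tag constants plus the GetContract and registration that go with this Process() (a sketch; the exact tag strings are my assumption, chosen to match the graph config above):

// Tag constants; the strings are assumed to match the graph config above.
constexpr char VectorFloat[] = "VECTOR_FLOAT";
constexpr char OutputMatrix[] = "MATRIX";

absl::Status VectorToTensorCalculator::GetContract(CalculatorContract* cc) {
  cc->Inputs().Tag(VectorFloat).Set<std::vector<float>>();
  cc->Outputs().Tag(OutputMatrix).Set<Matrix>();
  return absl::OkStatus();
}

REGISTER_CALCULATOR(VectorToTensorCalculator);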

C++ Driver code

For completeness: here is the slightly modified MediaPipe "hello world" desktop example code that runs a graph with the above-mentioned config:

  CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));

  // Add output poller to graph, looking for "matrix"
  ASSIGN_OR_RETURN(OutputStreamPoller poller, graph.AddOutputStreamPoller("matrix"));

  // Start Graph
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  // Init input vector
  std::vector<float> inputVector;
  for (size_t i = 0; i < 6; i++)
  {
    inputVector.push_back( (float) i );
  }
  
  // Give 10 input packets that contain an input vector.
  for (int i = 0; i < 10; ++i) {
    MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
        "in", MakePacket<std::vector<float>>(inputVector).At(Timestamp(i))));
  }
  // Close the input stream "in".
  MP_RETURN_IF_ERROR(graph.CloseInputStream("in"));
  mediapipe::Packet packet;

  // Get output packets.
  while (poller.Next(&packet)) {
    auto outputMatrix = packet.Get<Matrix>();
    LOG(INFO) << "outputMatrix: " << outputMatrix;
  }

Issue

The above example crashes with the following error message:

I20220614 11:15:05.629817 19413 tflite_converter_calculator.cc:260] TfLiteConverterCalculator::Process
I20220614 11:15:05.629838 19413 tflite_converter_calculator.cc:402] MP_RETURN_IF_ERROR(CopyMatrixToTensor(matrix, tensor_ptr));
I20220614 11:15:05.629853 19413 tflite_converter_calculator.cc:404] MP_RETURN_IF_ERROR(CopyMatrixToTensor(matrix, tensor_ptr)); Completed
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
WARNING: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors (tensor#26 is a dynamic-sized tensor).
F20220614 11:15:05.648741 19358 action.cc:120] Check failed: mediapipe::PrintNetworkOutput().ok() 
*** Check failure stack trace: ***
    @     0x55555604fd54  google::LogMessage::Fail()
    @     0x55555604fc9d  google::LogMessage::SendToLog()
    @     0x55555604f5d4  google::LogMessage::Flush()
    @     0x55555605258a  google::LogMessageFatal::~LogMessageFatal()
    @     0x55555559f088  main
    @     0x7ffff6ca1c87  __libc_start_main
    @     0x55555559e28a  _start
    @              (nil)  (unknown)
Stop reason: signal SIGABRT

Interesting:

  • The warning "WARNING: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors (tensor#26 is a dynamic-sized tensor)." might be a hint to the problem. However, mediapipe::Matrix seems to be a typedef for Eigen::MatrixXf, which is a dynamic-size matrix. I also tried using a static Eigen::Matrix<float, 2, 3> instead, but to no avail. (A sketch for checking the model itself for dynamic tensors follows after this list.)
  • When I remove the "TfLiteInferenceCalculator" from the graph, the code works fine. While debugging I can also see TfLiteInferenceCalculator::Open and TfLiteInferenceCalculator::GetContract being executed, but the program never reaches TfLiteInferenceCalculator::Process; it crashes before that.
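As mentioned in the first bullet above, here is a sketch for checking the converted model itself for dynamic-sized tensors, again via the plain TFLite C++ API (assuming an interpreter built as in the earlier sketches; as far as I can tell, kTfLiteDynamic marks dynamically allocated tensors):

#include <cstdio>

#include "tensorflow/lite/interpreter.h"

// Lists tensors that TFLite marks as dynamically sized. Assumes the
// interpreter was built and AllocateTensors() was called, as in the
// earlier sketches.
void PrintDynamicTensors(const tflite::Interpreter& interpreter) {
  for (int i = 0; i < static_cast<int>(interpreter.tensors_size()); ++i) {
    const TfLiteTensor* t = interpreter.tensor(i);
    if (t != nullptr && t->allocation_type == kTfLiteDynamic) {
      std::printf("tensor #%d (%s) is dynamic-sized\n", i,
                  t->name ? t->name : "unnamed");
    }
  }
}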

sorry for the long post :potato:

mgarbade avatar Jun 14 '22 14:06 mgarbade

Some further findings (using the above code, but changing the graph definition):

  • this graph works:
        input_stream: "in"
        output_stream: "out"

        node {
          calculator: "MatrixToTensorCalculator"
          input_stream: "in"
          output_stream: "tensor_features"
        }
        node: {
          calculator: "TensorToMatrixCalculator"
          input_stream: "TENSOR:tensor_features"
          output_stream: "MATRIX:matrix"
        }
        node {
          calculator: "MatrixToVectorCalculator"
          input_stream: "matrix"
          output_stream: "out"
        }
  • these two graphs do not work:
        input_stream: "in"
        output_stream: "out"

        node {
          calculator: "TfLiteConverterCalculator"
          input_stream: "Matrix:in"
          output_stream: "TENSORS:image_tensor"
        }

        node {
          calculator: "TfLiteTensorsToFloatsCalculator"
          input_stream: "TENSORS:image_tensor"
          output_stream: "FLOATS:out"
        }

and

        input_stream: "in"
        output_stream: "out"

        node {
          calculator: "TfLiteConverterCalculator"
          input_stream: "Matrix:in"
          output_stream: "TENSORS:image_tensor"
        }

        node {
          calculator: "TfLiteInferenceCalculator"
          input_stream: "TENSORS:image_tensor"
          output_stream: "TENSORS:tensor_features"
          options: {
            [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
              model_path: "mediapipe/models/adder_model_single_input_2x3.tflite"
            }
          }
        }

        node {
          calculator: "TfLiteTensorsToFloatsCalculator"
          input_stream: "TENSORS:tensor_features"
          output_stream: "FLOATS:out"
        }

The error message is the same unintelligible / unhelpful message as shown above.
Also note:

  • I'm only using "official Calculators" (provided by mediapipe)
  • Even the graph without the TfLiteInferenceCalculator crashes, which suggests that the error is probably not coming from that calculator or from the tflite model itself

mgarbade avatar Jun 20 '22 17:06 mgarbade

Ok, I finally found a "working" graph for my simple hello_world / tflite toy-example:

node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "MATRIX:in"
  output_stream: "TENSORS:image_tensor"
  options: {
    [mediapipe.TfLiteConverterCalculatorOptions.ext] {
      zero_center: false
    }
  }
}

node {
  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:image_tensor"
  output_stream: "TENSORS:tensor_features"
  options: {
    [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
      model_path: "mediapipe/models/adder_model_single_input_2x3.tflite"
    }
  }
}

node {
  calculator: "TfLiteTensorsToFloatsCalculator"
  input_stream: "TENSORS:tensor_features"
  output_stream: "FLOATS:out"
}

The corresponding BUILD file looks like this (mediapipe/examples/desktop/hello_world):

licenses(["notice"])

package(default_visibility = ["//mediapipe/examples:__subpackages__"])

cc_binary(
    name = "hello_world",
    srcs = ["hello_world.cc"],
    visibility = ["//visibility:public"],
    deps = [
        "//mediapipe/calculators/tflite:tflite_converter_calculator",    # <- new
        "//mediapipe/calculators/tflite:tflite_inference_calculator",    # <- new
        "//mediapipe/calculators/tflite:tflite_tensors_to_floats_calculator",    # <- new
        
        "//mediapipe/framework:calculator_graph",
        "//mediapipe/framework/port:logging",
        "//mediapipe/framework/port:parse_text_proto",
        "//mediapipe/framework/port:status",
    ],
)

and the "driver code" (again adapted from the hello_world.cc example):

  CalculatorGraph graph;
  LOG(INFO) << "MP_RETURN_IF_ERROR(graph.Initialize(config));";
  MP_RETURN_IF_ERROR(graph.Initialize(config));
  ASSIGN_OR_RETURN(OutputStreamPoller poller,
                   graph.AddOutputStreamPoller("out"));
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  // Give 10 input packets that contain a 2x3 Matrix.
  for (int i = 0; i < 10; ++i) {
    const int nrows = 2;
    const int ncols = 3;
    Matrix inputMatrix;
    inputMatrix.resize(nrows, ncols);
    for (int row = 0; row < nrows; row++) {
      for (int col = 0; col < ncols; col++) {
        int index = row * ncols + col;
        inputMatrix(row, col) = (float) index;
      }
    }
    MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
        "in", MakePacket<Matrix>(inputMatrix).At(Timestamp(i))));
  }
  // Close the input stream "in".
  MP_RETURN_IF_ERROR(graph.CloseInputStream("in"));
  mediapipe::Packet packet;

  // Get output packets.
  LOG(INFO) << "Get output packets";
  int counter = 0;
  while (counter < 10) {
    std::cout << "Counter: " << counter << std::endl;
    counter++;

    if (poller.Next(&packet)) {
      auto outputVector = packet.Get<std::vector<float>>();
      std::cout << "outputVector: ";
      for (auto item : outputVector)
        std::cout << item << ", ";
      std::cout << std::endl;
    } else {
      std::cout << "Poller could not get packet" << std::endl;
    }
  }
  return graph.WaitUntilDone();

I guess I was always missing at least the following things:

  • The TAG name needs to match the exact name defined in the corresponding calculator's source file, so I had both
    • a wrong TAG name for the input node
    • a wrong TAG name for the output node
  • I forgot to add a dependency for each calculator to the BUILD file

However, there is still a bug that remains:

  • The above code only returns 3 packets, whereas the graph was given 10 packets as input! Not sure why this happens yet... (see the draining sketch after this list)
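Maybe related to the missing packets: instead of polling a fixed 10 times, the original hello_world pattern drains the poller until the graph closes the stream. A sketch of what I mean, reusing the graph and poller from the driver code above:

  // Drain the poller; Next() blocks and returns false once the graph
  // has closed the output stream, so no packets should be missed.
  MP_RETURN_IF_ERROR(graph.CloseInputStream("in"));
  mediapipe::Packet packet;
  while (poller.Next(&packet)) {
    const auto& floats = packet.Get<std::vector<float>>();
    for (float v : floats) std::cout << v << ", ";
    std::cout << std::endl;
  }
  return graph.WaitUntilDone();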

mgarbade avatar Jun 21 '22 12:06 mgarbade

Hello @mgarbade, we are upgrading the MediaPipe Legacy Solutions to the new MediaPipe Solutions. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services, such as Maven and NPM.

You can continue to use those legacy solutions in your applications if you choose. However, we would ask you to check out the new MediaPipe Solutions, which can help you more easily build and customize ML solutions for your applications. These new solutions will provide a superset of the capabilities available in the legacy solutions. Thank you.

kuaashish avatar Apr 28 '23 09:04 kuaashish

This issue has been marked stale because it has had no recent activity for the past 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] avatar May 06 '23 01:05 github-actions[bot]

This issue was closed due to lack of activity after being marked stale for the past 7 days.

github-actions[bot] avatar May 13 '23 01:05 github-actions[bot]
