utensor_cgen

How to re-use a WrappedRamTensor and provide new input data

Open RajeshSiraskar opened this issue 6 years ago • 8 comments

Hi,

I am a beginner with uTensor and embedded C/C++. I have a little experience with Python and wanted to explore intelligence at the edge by building models in Python and deploying them on Cortex boards. @neil-tan helped me understand the basics, and I used his tutorial to get started.

Passing the input data wrapped in a WrappedRamTensor works great the first time. But when I try to provide another instance of input data and do a second pass, it fails with an error. What could I be doing wrong? Does the input data tensor have to be thread-safe?

Output with the error

[1] First instance of prediction: For input 10.000
 Input: 10.000 | Expected: 72.999 | Predicted: 71.871

 [2] Second instance of prediction: For input 40.000
[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

Source code

  // A single value is being used so Tensor shape is {1, 1} 
  float input_data[1] = {10.0}; 
  Tensor* input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

  // Value predicted by LR model
  S_TENSOR pred_tensor;         
  float pred_value;             
  
  // Compute model value for comparison
  float W = 6.968;
  float B = 3.319;
  float y;

  // First pass: Constant value 10.0 and evaluate first time:
  printf("\n [1] First instance of prediction: For input %4.3f", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  // Second pass: Change input data and re-evaluate:
  input_data[0] = 40.0;
  printf("\n\n [2] Second instance of prediction: For input %4.3f\n", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  printf("\n -------------------------------------------------------------------\n");
  return 0;
}

RajeshSiraskar avatar Mar 02 '19 07:03 RajeshSiraskar

Can you show me what's inside get_LR_model_ctx? Also, I think we haven't fixed some issues in the Context class, so reusing it over time will crash the program. Try creating a new Context object before passing it to the get_LR_model_ctx function.

dboyliao avatar Mar 03 '19 07:03 dboyliao

Hi @dboyliao

I did try creating new instances, but it still gives the same error: I declared Context ctx, ctx_2; and used ctx_2 for the second ctx_2.eval().

I have attached a zip file with the generated C++ files. Also @dboyliao, I tried to understand the public data member of WrappedRamTensor. Should I be using that to assign new data for evaluation? If yes, how do I use it?

Thanks for helping.

LR_model.zip

RajeshSiraskar avatar Mar 03 '19 15:03 RajeshSiraskar

Ah, I think I know what's going wrong. Your input_x is declared as a raw pointer, so after the first ctx.eval it may point to an invalid address. Try adding input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data); after input_data[0] = 40.0;

dboyliao avatar Mar 04 '19 14:03 dboyliao

@RajeshSiraskar I suppose your code is as follows:

 Context ctx, ctx2;
  // A single value is being used so Tensor shape is {1, 1} 
  float input_data[1] = {10.0}; 
  Tensor* input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

  // Value predicted by LR model
  S_TENSOR pred_tensor;         
  float pred_value;             
  
  // Compute model value for comparison
  float W = 6.968;
  float B = 3.319;
  float y;

  // First pass: Constant value 10.0 and evaluate first time:
  printf("\n [1] First instance of prediction: For input %4.3f", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  // Second pass: Change input data and re-evaluate:
  input_data[0] = 40.0;
  printf("\n\n [2] Second instance of prediction: For input %4.3f\n", input_data[0]);
  get_LR_model_ctx(ctx2, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx2.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx2.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  printf("\n -------------------------------------------------------------------\n");
  return 0;
}

and you get the same error at the second eval?

[2] Second instance of prediction: For input 40.000
[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

Is my understanding correct?

Knight-X avatar Mar 04 '19 14:03 Knight-X

@Knight-X in the second call to get_LR_model_ctx, should it be ctx2, not ctx?

dboyliao avatar Mar 04 '19 14:03 dboyliao

@dboyliao ya, you are right. It was a typo.

Knight-X avatar Mar 04 '19 15:03 Knight-X

Hi,

I tried all three experiments:

[Change 1: Adding input_x = new WrappedRamTensor] @dboyliao: I was quite sure I had tried that earlier, but I reconfirmed it anyway. This is the runtime error I get:

[Error] lib\uTensor\core\context.cpp:32 @add tensor with name "" address already exist in rTable

When I add this:

// Second pass: Change input data and re-evaluate:
input_data[0] = 40.0;
input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

Here there is only ONE Context (ctx), which I use for both predictions.

[Change 2: Adding Context ctx, ctx2;] @Knight-X

I did retry Context ctx, ctx2; and yes it gives the error mentioned in your post. I reconfirmed there was no typo -- as below

[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

get_LR_model_ctx(ctx2, input_x);                   // Pass the 'input' data tensor to the context
pred_tensor = ctx2.get("y_pred:0");                // Get a reference to the 'output' tensor
ctx2.eval();                                       // Trigger the inference engine
pred_value = *(pred_tensor->read<float>(0, 0));

[Change 3: Added BOTH changes together]

Here I combined both of the above; here's the output. It executes without error but does NOT output the correct value:

[1] First instance of prediction: For input 10.000
 Input: 10.000 | Expected: 72.999 | Predicted: 71.871

 [2] Second instance of prediction: For input 40.000
 Input: 40.000 | Expected: 282.039 | Predicted: 71.871

RajeshSiraskar avatar Mar 04 '19 15:03 RajeshSiraskar

Hi - Just in case the board matters:

Board: https://os.mbed.com/platforms/ST-Discovery-L476VG/
IDE: PlatformIO

Thanks for helping

RajeshSiraskar avatar Mar 05 '19 04:03 RajeshSiraskar