
Error when using Tensorflow Inception v3 Model with OpenCV

Open shmakovigor opened this issue 6 years ago • 5 comments

Hello! I'm having problems trying to load a Tensorflow Inception v3 model using readNetFromTensorflow in OpenCV 4.1.0 with Python 3.6:

import cv2
cv2.dnn.readNetFromTensorflow("nsfw.299x299.pb")

I get the following error:

cv2.error: OpenCV(4.1.0) /Users/travis/build/skvark/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:535: error: (-2:Unspecified error) Input [batch_normalization_1/ones_like] for node [batch_normalization_1/FusedBatchNorm_1] not found in function 'getConstBlob'

I have also tried generating a pbtxt from the pb file and loading the model like this:

import cv2
cv2.dnn.readNetFromTensorflow("nsfw.299x299.pb", "nsfw.299x299.pbtxt")

But I get another error:

cv2.error: OpenCV(4.1.0) /Users/travis/build/skvark/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:616: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in function 'addConstNodes'

This issue could be related to https://github.com/opencv/opencv/issues/14073, and it seems the model file itself could have some issues. Has anybody succeeded in using this model with OpenCV? Thanks for the help!
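
For context, here is a minimal sketch of how a text graph can be dumped from the pb file (assuming TF 1.x graph APIs; the output path is just a placeholder):

import tensorflow as tf
from tensorflow.core.framework import graph_pb2

# Parse the binary GraphDef and rewrite it as a human-readable pbtxt.
graph_def = graph_pb2.GraphDef()
with open("nsfw.299x299.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.train.write_graph(graph_def, ".", "nsfw.299x299.pbtxt", as_text=True)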

shmakovigor avatar May 30 '19 06:05 shmakovigor

I found that OpenCV dnn only allows inference, so the model needs to be optimized for inference. Any ideas on how to achieve that?

shmakovigor avatar May 30 '19 18:05 shmakovigor

That's new to me, but from some googling I found this script:

python -m tensorflow.python.tools.optimize_for_inference --input output_graph.pb --output g.pb --input_names=input_1 --output_names=dense_1/Softmax,dense_2/Softmax

Apparently, you can take any Tensorflow model and optimize it for inference: https://medium.com/@prasadpal107/saving-freezing-optimizing-for-inference-restoring-of-tensorflow-models-b4146deb21b5
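
To figure out the right --input_names and --output_names values, one option is to dump the node names the graph actually contains. A minimal sketch (the op filter is just a heuristic for typical input and classifier-output nodes):

from tensorflow.core.framework import graph_pb2

# Print candidate input (Placeholder) and output (Softmax) node names.
graph_def = graph_pb2.GraphDef()
with open("nsfw.299x299.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.op in ("Placeholder", "Softmax"):
        print(node.op, node.name)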

GantMan avatar Jun 03 '19 03:06 GantMan

I have already tried this, but with --output_names=dense_3/Softmax, and it throws many warnings like:

WARNING:tensorflow:Didn't find expected gamma Constant input to 'batch_normalization_93/FusedBatchNorm_1', found name: "batch_normalization_93/ones_like" instead. Maybe because freeze_graph wasn't run first?

And "optimised" model still work with OpenCV.

With --output_names=dense_1/Softmax,dense_2/Softmax it throws errors:

dense_1/Softmax is not in graph; dense_2/Softmax is not in graph

Maybe we need to run freeze_graph first, as the warning suggests? I probably need a checkpoint for that, which I can't find in the repo.
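
One way to sidestep the missing checkpoint entirely might be to freeze the graph directly from the repo's Keras model. A minimal sketch, assuming TF 1.x and a Keras .h5 file (the filename here is hypothetical):

import tensorflow as tf

# Force inference mode before loading, so batch-norm layers are built
# for inference rather than training.
tf.keras.backend.set_learning_phase(0)

model = tf.keras.models.load_model("nsfw.299x299.h5")  # hypothetical filename
sess = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]
print(output_names)  # these are the names to pass to --output_names

# Bake the variables into constants, producing a frozen GraphDef.
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
tf.train.write_graph(frozen, ".", "frozen_graph.pb", as_text=False)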

shmakovigor avatar Jun 03 '19 06:06 shmakovigor

@shmakovigor Same issue. Did you solve this?

Scorpionchiques avatar Jul 03 '19 10:07 Scorpionchiques

All of this should be solved with the latest OpenCV dldt project code (Release 2020-R1).

The command to convert the current Tensorflow 2.1 model is:

python mo_tf.py --input_model frozen_graph.pb --model_name YOUR_DESIRED_NAME --data_type FP16 --mean_values=[0,0,0] --input_shape=[1,224,224,3] --scale_values=[255,255,255] --enable_concat_optimization --reverse_input_channels

The scale, mean, and reverse_input values are critical; without them your model won't work properly and you'll get garbage. Do not do mean subtraction or scale specification in OpenCV's code before running inference either, or you'll break things again, because that info is already inside the inference_engine pipeline. These values become part of the model because you passed them on the command line.
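
For completeness, here is a minimal sketch of loading the converted IR back into OpenCV dnn (the file names follow the mo_tf.py command above, and the image path is a placeholder; note that the blob is built with no mean, scale, or channel swap, since that preprocessing is already baked into the model):

import cv2

# Load the OpenVINO IR produced by mo_tf.py.
net = cv2.dnn.readNet("YOUR_DESIRED_NAME.xml", "YOUR_DESIRED_NAME.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)

img = cv2.imread("test.jpg")
# No mean subtraction, scaling, or channel swap here; those were baked in
# via --mean_values, --scale_values and --reverse_input_channels.
blob = cv2.dnn.blobFromImage(img, size=(224, 224))
net.setInput(blob)
scores = net.forward()
print(scores)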

@GantMan You can close this out if OP doesn't.

TechnikEmpire avatar Mar 06 '20 00:03 TechnikEmpire