TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."
When I execute python Object_detection_image.py or the other Object_detection files, I get the following error message:
KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."
I get it both with Edje's prepared version and with my own dataset. I also get the same error when I adapt the Jupyter notebook example to my own data (the original version works perfectly fine). I use TensorFlow 1.5, CPU only, Python 2.7, and Ubuntu 16.04. A higher TensorFlow version does not work for me.
Does anybody have an idea how to solve my issue? Possibly the mistake is in the export_inference_graph.py step. I called it with the command:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-6347 --output_file inference_graph/frozen_inference_graph.pb
The original one was: python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-6347 --output_file inference_graph
It did not work for me.
Thanks in advance!
The code stops at this part:
# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
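As a quick sanity check (a sketch of my own, not part of the tutorial scripts), you can load the frozen graph and print its operation names to see whether an 'image_tensor' op exists at all; if it does not, the export step produced a graph without the expected input node. This assumes a TF 1.x frozen graph at inference_graph/frozen_inference_graph.pb, which is a placeholder path.

import tensorflow as tf

# Hypothetical path; adjust to wherever your exported graph actually lives.
PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
    graph_def.ParseFromString(f.read())

detection_graph = tf.Graph()
with detection_graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Print every op name; 'image_tensor' must appear here, otherwise
# get_tensor_by_name('image_tensor:0') raises the KeyError shown above.
for op in detection_graph.get_operations():
    print(op.name)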
The problem was solved because export_inference_graph.py had been updated in the TensorFlow model zoo. I used this code now:
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

r"""Tool to export an object detection model for inference.

Prepares an object detection tensorflow graph for inference using model
configuration and a trained checkpoint. Outputs inference graph, associated
checkpoint files, a frozen inference graph and a SavedModel
(https://tensorflow.github.io/serving/serving_basic.html).

The inference graph contains one of three input nodes depending on the user
specified option.
  * `image_tensor`: Accepts a uint8 4-D tensor of shape [None, None, None, 3]
  * `encoded_image_string_tensor`: Accepts a 1-D string tensor of shape [None]
    containing encoded PNG or JPEG images. Image resolutions are expected to
    be the same if more than 1 image is provided.
  * `tf_example`: Accepts a 1-D string tensor of shape [None] containing
    serialized TFExample protos. Image resolutions are expected to be the
    same if more than 1 image is provided.

and the following output nodes returned by the model.postprocess(..):
  * `num_detections`: Outputs float32 tensors of the form [batch] that
    specifies the number of valid boxes per image in the batch.
  * `detection_boxes`: Outputs float32 tensors of the form [batch, num_boxes, 4]
    containing detected boxes.
  * `detection_scores`: Outputs float32 tensors of the form [batch, num_boxes]
    containing class scores for the detections.
  * `detection_classes`: Outputs float32 tensors of the form [batch, num_boxes]
    containing classes for the detections.
  * `detection_masks`: Outputs float32 tensors of the form
    [batch, num_boxes, mask_height, mask_width] containing predicted instance
    masks for each box if it is present in the dictionary of postprocessed
    tensors returned by the model.

Notes:
  * This tool uses `use_moving_averages` from eval_config to decide which
    weights to freeze.

Example Usage:
--------------
python export_inference_graph \
    --input_type image_tensor \
    --pipeline_config_path path/to/ssd_inception_v2.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model_directory

e.g.:
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/faster_rcnn_inception_v2_pets.config \
    --trained_checkpoint_prefix training/model.ckpt-6290 \
    --output_directory inference_graph

The expected output would be in the directory
path/to/exported_model_directory (which is created if it does not exist)
with contents:
  - inference_graph.pbtxt
  - model.ckpt.data-00000-of-00001
  - model.ckpt.info
  - model.ckpt.meta
  - frozen_inference_graph.pb
  - saved_model (a directory)

Config overrides (see the `config_override` flag) are text protobufs (also of
type pipeline_pb2.TrainEvalPipelineConfig) which are used to override certain
fields in the provided pipeline_config_path. These are useful for making small
changes to the inference graph that differ from the training or eval config.

Example Usage (in which we change the second stage post-processing score
threshold to be 0.5):

python export_inference_graph \
    --input_type image_tensor \
    --pipeline_config_path path/to/ssd_inception_v2.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model_directory \
    --config_override " \
        model{ \
          faster_rcnn { \
            second_stage_post_processing { \
              batch_non_max_suppression { \
                score_threshold: 0.5 \
              } \
            } \
          } \
        }"
"""
import tensorflow as tf
from google.protobuf import text_format
import exporter
from protos import pipeline_pb2

slim = tf.contrib.slim
flags = tf.app.flags

flags.DEFINE_string('input_type', 'image_tensor', 'Type of input node. Can be '
                    'one of [`image_tensor`, `encoded_image_string_tensor`, '
                    '`tf_example`]')
flags.DEFINE_string('input_shape', None,
                    'If input_type is `image_tensor`, this can explicitly set '
                    'the shape of this input tensor to a fixed size. The '
                    'dimensions are to be provided as a comma-separated list '
                    'of integers. A value of -1 can be used for unknown '
                    'dimensions. If not specified, for an `image_tensor`, the '
                    'default shape will be partially specified as '
                    '`[None, None, None, 3]`.')
flags.DEFINE_string('pipeline_config_path', None,
                    'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
                    'file.')
flags.DEFINE_string('trained_checkpoint_prefix', None,
                    'Path to trained checkpoint, typically of the form '
                    'path/to/model.ckpt')
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string('config_override', '',
                    'pipeline_pb2.TrainEvalPipelineConfig '
                    'text proto to override pipeline_config_path.')
flags.DEFINE_boolean('write_inference_graph', False,
                     'If true, writes inference graph to disk.')
tf.app.flags.mark_flag_as_required('pipeline_config_path')
tf.app.flags.mark_flag_as_required('trained_checkpoint_prefix')
tf.app.flags.mark_flag_as_required('output_directory')
FLAGS = flags.FLAGS


def main(_):
  pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
  with tf.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
    text_format.Merge(f.read(), pipeline_config)
  text_format.Merge(FLAGS.config_override, pipeline_config)
  if FLAGS.input_shape:
    input_shape = [
        int(dim) if dim != '-1' else None
        for dim in FLAGS.input_shape.split(',')
    ]
  else:
    input_shape = None
  exporter.export_inference_graph(
      FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_prefix,
      FLAGS.output_directory, input_shape=input_shape,
      write_inference_graph=FLAGS.write_inference_graph)


if __name__ == '__main__':
  tf.app.run()
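For reference, here is a minimal sketch of my own (not from the repository) for running the exported frozen_inference_graph.pb on one image, using the input and output tensor names documented in the docstring above. The paths and the use of OpenCV are assumptions; adjust them to your setup.

import numpy as np
import tensorflow as tf
import cv2  # assumes OpenCV is installed, as in Edje's tutorial

PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'  # placeholder path
PATH_TO_IMAGE = 'test_images/image1.jpg'                            # placeholder path

# Load the frozen graph into its own tf.Graph.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image = cv2.imread(PATH_TO_IMAGE)
    image_expanded = np.expand_dims(image, axis=0)  # shape [1, H, W, 3], uint8

    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')
    num = detection_graph.get_tensor_by_name('num_detections:0')

    (boxes, scores, classes, num) = sess.run(
        [boxes, scores, classes, num],
        feed_dict={image_tensor: image_expanded})
    print('Valid detections in the image:', int(num[0]))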
The first error is shown because you don't have frozen_inference_graph.pb in this directory: /object_detection/inference_graph.
Try this: python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-6347 --output_file inference_graph/frozen_inference_graph.pb
Make sure this number, 6347, is the last step you've saved.
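If you are not sure which checkpoint step is the latest, one way to check (a small sketch of my own, assuming the checkpoints live in the training/ folder) is:

import tensorflow as tf

# Prints something like 'training/model.ckpt-6347'; the suffix is the
# step number to use in --trained_checkpoint_prefix.
print(tf.train.latest_checkpoint('training'))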
@blumenkindC3 how did you solve it?
I had the same error with TF 1.10.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path /app/tf_object_detection_api/config/faster_rcnn_inception_v2_pets.config \
    --trained_checkpoint_prefix /app/tf_object_detection_api/models/model.ckpt \
    --output_directory /app/tf_object_detection_api/models/faster_rcnn_inception_v2_pets
(Quoting the earlier reply: "Problem was solved because export_inference_graph.py was updated in the TensorFlow model zoo. I used this code now:" followed by the same updated export_inference_graph.py shown above.)
Can you explain? I have the same issue.
Hi all, I'm training a custom dataset in TF2 and have now finished training the model. What should I do to test my model with a test image?
I tried to follow the same steps as in TensorFlow 1, but I couldn't get the desired output. Also, I'm not able to generate the .pb file to run the object detection script.
Any suggestions or help would be highly appreciated. Thanks in advance. Below is a screenshot of the error I get when I run the object detection script from the TF1 version.
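For reference, in the TF2 Object Detection API the export step is done with exporter_main_v2.py, which writes a SavedModel instead of a frozen graph, and the exported model is loaded with tf.saved_model.load. Below is a minimal sketch under those assumptions; the paths (training/, exported_model/) are placeholders, not files from this tutorial, and the export command flags should be double-checked against your copy of the script.

# Export step (run from the models/research directory), roughly:
#   python object_detection/exporter_main_v2.py \
#       --input_type image_tensor \
#       --pipeline_config_path training/pipeline.config \
#       --trained_checkpoint_dir training/ \
#       --output_directory exported_model
import numpy as np
import tensorflow as tf

# The exporter writes exported_model/saved_model/saved_model.pb plus a variables/ folder.
detect_fn = tf.saved_model.load('exported_model/saved_model')

# Replace the dummy array with a real test image loaded as uint8 HxWx3.
image = np.zeros((480, 640, 3), dtype=np.uint8)
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]  # shape [1, H, W, 3]

detections = detect_fn(input_tensor)
print(int(detections['num_detections'][0]), 'detections')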
Hi Jain,
Could you help with a similar issue? I have been trying to do the same thing as you.
Thanks
This worked for me.
Your .pb model will be created inside training/saved_model/saved_model.pb.
Just change --output_file to --output_directory=training.
E.g.:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_coco.config --trained_checkpoint_prefix training/model.ckpt-1000 --output_directory=training
Note: you have to use an existing directory, like training or object_detection, etc.
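If you want to use that saved_model.pb directly (the exporter also writes frozen_inference_graph.pb into the same output directory), here is a minimal TF 1.x sketch of my own for loading it, assuming the path from the command above and that the SavedModel keeps the same tensor names as the frozen graph:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel that the exporter wrote under --output_directory=training.
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                               'training/saved_model')
    # Confirm the input node exists in this graph before running detections.
    image_tensor = sess.graph.get_tensor_by_name('image_tensor:0')
    print(image_tensor)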