keras-yolo3
TypeError: Cannot interpret feed_dict key as Tensor
I've been receiving this error:
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("keras_learning_phase:0", shape=(), dtype=bool) is not an element of this graph. Aborted (core dumped)
as a result of running this section of code at line 116 of yolo.py:

out_boxes, out_scores, out_classes = self.sess.run(
    [self.boxes, self.scores, self.classes],
    feed_dict={
        self.yolo_model.input: image_data,
        self.input_image_shape: [image.size[1], image.size[0]],
        K.learning_phase(): 0
    })
I'm not sure if it's a TensorFlow dependency issue or an issue with my input image, but any help solving it would be greatly appreciated.
Did you solve this issue?
You can comment out the "keras_learning_phase:0" entry; it runs fine, I have tried it.
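For reference, a sketch of what the call in detect_image would look like with that entry removed (same code as in the issue above, nothing else changed):

out_boxes, out_scores, out_classes = self.sess.run(
    [self.boxes, self.scores, self.classes],
    feed_dict={
        self.yolo_model.input: image_data,
        self.input_image_shape: [image.size[1], image.size[0]],
        # K.learning_phase(): 0  # commented out as suggested
    })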
@williamlake Have you solved this problem? I have the same bug as you.
@yoyoshuang I have tried your suggestion, but there's another bug. Could you please explain in detail?

File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py", line 1167, in predict
    steps=steps)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training_arrays.py", line 294, in predict_loop
    batch_outs = f(ins_batch)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 2666, in __call__
    return self._call(inputs)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 2635, in _call
    session)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 2587, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1483, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1441, in __init__
    session._session, options_ptr, status)
File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: FeedInputs: unable to find feed output input_img:0

Exception ignored in: <bound method BaseSession._Callable.__del__ of <tensorflow.python.client.session.BaseSession._Callable object at 0x000001C6041C9E10>>
Traceback (most recent call last):
  File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1467, in __del__
    self._session._session, self._handle, status)
  File "D:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No such callable handle: 1949983367240
This is the first time I see someone passing the Keras execution mode to tf.Session() as a placeholder (a <tf.Tensor 'keras_learning_phase:0' shape=<unknown> dtype=bool> tensor, to be precise). I'm not very knowledgeable about the inner workings of the stack, but my guess is that this is done to switch layers like BatchNorm to their inference behavior, since they are not supposed to act as in training at inference time; inference mode is denoted by K.learning_phase(): 0. I guess if you omit this feed, maybe it defaults to training?
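A small illustration of what that flag controls (a sketch, not code from this repo): with the old graph-mode Keras backend, the learning phase can also be pinned globally before the model is built, which puts BatchNormalization and Dropout into inference behavior without feeding the placeholder on every call.

from keras import backend as K

# Pin the learning phase to "test" (0) before building/loading the model.
# BatchNorm then uses its moving statistics and Dropout becomes a no-op,
# so feed_dict no longer needs a K.learning_phase(): 0 entry.
K.set_learning_phase(0)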
Actually, I'm encountering issues when putting the model on a tensorflow-serving instance. The server refuses to process my request unless I somehow include this extra tensor in the request -.-
Here I'm converting an image to a properly sized NumPy array, fetching the graph input and output tensors by name (I previously modified the names by prepending "out_"), and instantiating a pair of placeholders for the image size and the execution mode flag (Keras learning phase):
# numpy.ndarray, shape=(1, 416, 416, 3), dtype=float32
resized_image = preprocess_image(PIL_image)
# Tensor("input_1:0", shape=(?, ?, ?, 3), dtype=float32)
image_ph = sess.graph.get_tensor_by_name('input_1:0')
# Tensor("out_conv2d_59:0", shape=(?, ?, ?, 255), dtype=float32)
boxes_ph = sess.graph.get_tensor_by_name('out_conv2d_59:0')
# Tensor("out_conv2d_67:0", shape=(?, ?, ?, 255), dtype=float32)
scores_ph = sess.graph.get_tensor_by_name('out_conv2d_67:0')
# Tensor("out_conv2d_75:0", shape=(?, ?, ?, 255), dtype=float32)
classes_ph = sess.graph.get_tensor_by_name('out_conv2d_75:0')
# Tensor("Placeholder:0", shape=(2,), dtype=float32)
input_image_shape_ph = tf.placeholder(tf.float32, shape=(2,))
# Tensor("keras_learning_phase_1:0", dtype=bool)
K_learning_phase_ph = tf.placeholder(tf.bool, shape=(), name="keras_learning_phase")
Then I execute the session:
out_boxes, out_scores, out_classes = sess.run(
    [boxes_ph, scores_ph, classes_ph],
    feed_dict={image_ph: resized_image,
               input_image_shape_ph: [resized_image.shape[1], resized_image.shape[0]],
               K_learning_phase_ph: 0})
And I get the same error whether I pass the K_learning_phase_ph placeholder or not:

InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]]]
Help? T.T
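A hedged guess at what is going on there (not verified against this exact graph): the tf.placeholder call above creates a brand-new tensor, which is why it prints as keras_learning_phase_1:0, while the BatchNorm layers in the loaded graph depend on the original keras_learning_phase placeholder named in the error. Fetching that existing tensor by name and feeding it, instead of creating a new one, might look like this (the tensor name is taken from the error message and may differ in your graph):

# Reuse the graph's own learning-phase placeholder instead of creating a new one.
K_learning_phase_ph = sess.graph.get_tensor_by_name(
    'batch_normalization_1/keras_learning_phase:0')

out_boxes, out_scores, out_classes = sess.run(
    [boxes_ph, scores_ph, classes_ph],
    feed_dict={image_ph: resized_image,
               K_learning_phase_ph: 0})  # 0 = inference mode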
You can comment out the "keras_learning_phase:0" entry; it runs fine, I have tried it.

I am doing this as well, and it also works.
@slothkong Have you solved that problem? Recently I have been trying to extract the features of the three output layers of YOLO, whose dimensions should be (13, 13, 255), (26, 26, 255), and (52, 52, 255). But when I feed an image as input and print the output layers, I always get this: [<tf.Tensor 'conv2d_59/BiasAdd:0' shape=(?, ?, ?, 255) dtype=float32>, <tf.Tensor 'conv2d_67/BiasAdd:0' shape=(?, ?, ?, 255) dtype=float32>, <tf.Tensor 'conv2d_75/BiasAdd:0' shape=(?, ?, ?, 255) dtype=float32>]
Any advice? Thanks in advance!
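A hedged note, in case it helps: those (?, ?, ?, 255) entries are the symbolic shapes of the graph tensors, printed before anything has been fed; they are not the evaluated outputs. Running the tensors in a session with a concrete image produces the real feature maps. A sketch, assuming yolo_model is the loaded Keras model and image_data is a preprocessed (1, 416, 416, 3) float32 array:

from keras import backend as K

sess = K.get_session()
feats = sess.run(yolo_model.output,                      # the three conv output tensors
                 feed_dict={yolo_model.input: image_data,
                            K.learning_phase(): 0})
for f in feats:
    print(f.shape)   # expect (1, 13, 13, 255), (1, 26, 26, 255), (1, 52, 52, 255)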
Removing the keras_learning_phase:0 entry inside sess.run in the detect_image method seems to do the trick. @qqwweee can you please explain why this key is used?
Two steps to solve this problem:

(1) Modify the __init__(self, **kwargs) method:

def __init__(self, **kwargs):
    self.__dict__.update(self._defaults)  # set up default values
    self.__dict__.update(kwargs)          # and update with user overrides
    self.class_names = self._get_class()
    self.anchors = self._get_anchors()
    self.sess = K.get_session()
    with self.sess.as_default():
        with self.sess.graph.as_default():
            self.K_learning_phase = K.learning_phase()
            self.boxes, self.scores, self.classes = self.generate()

(2) Modify the detect_image(self, image) method:

out_boxes, out_scores, out_classes = self.sess.run(
    [self.boxes, self.scores, self.classes],
    feed_dict={
        self.yolo_model.input: image_data,
        self.input_image_shape: [image.size[1], image.size[0]],
        self.K_learning_phase: 0
    })

Done.
@langziwuqing I followed your advice and modified my code (the yolo3 project), but I get another bug like this:

Input image filename: images/img00030.jpg
(416, 416, 3)
Traceback (most recent call last):
  File "yolo_video.py", line 73, in <module>
    detect_img(YOLO(**vars(FLAGS)))
  File "yolo_video.py", line 15, in detect_img
    r_image = yolo.detect_image(image)
  File "D:\YOLO\keras-yolo3-master\yolo.py", line 138, in detect_image
    self.K_learning_phase(): 0
TypeError: 'Tensor' object is not callable

I am a beginner in this area; what should I do? Please help.
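A note on that TypeError, for anyone following the two-step fix above: after step (1), self.K_learning_phase already holds the tensor returned by K.learning_phase(), so it must be used as a plain dictionary key, not called. Dropping the parentheses in detect_image should resolve it:

out_boxes, out_scores, out_classes = self.sess.run(
    [self.boxes, self.scores, self.classes],
    feed_dict={
        self.yolo_model.input: image_data,
        self.input_image_shape: [image.size[1], image.size[0]],
        self.K_learning_phase: 0   # no parentheses: this is a tensor, not a function
    })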
I have the same problem; have you solved it?
In my case, I encountered the same error when creating the session (i.e., constructing a YOLO instance) on the main thread and running the session (i.e., calling YOLO.detect_image()) from a different thread in a callback function. When using multiple threads like this, you have to set the default graph in the non-main threads before running detection. The change below fixed the issue for me. (I am running tensorflow-gpu==1.14 with keras==2.2.4.)
Note: some references state that you also have to set the session again, but in my case I found that setting the default graph was sufficient to prevent the error. I have included but commented out the line to set the session in case anyone else finds it necessary.
with self.sess.graph.as_default():  # new line (required)
    # K.tensorflow_backend.set_session(self.sess)  # new line (wasn't required for me)
    out_boxes, out_scores, out_classes = self.sess.run(
        [self.boxes, self.scores, self.classes],
        feed_dict={
            self.yolo_model.input: image_data,
            self.input_image_shape: [image.size[1], image.size[0]],
            K.learning_phase(): 0
        })
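To make the scenario concrete, here is a rough sketch of the threading pattern being described (the worker function and file names are made up for illustration; the fix itself is only the graph.as_default() wrapper inside detect_image):

import threading
from PIL import Image
from yolo import YOLO   # this repo's YOLO class, patched as shown above

yolo = YOLO()            # session and graph are created on the main thread

def worker(path):
    # Runs on a non-main thread; without sess.graph.as_default() inside
    # detect_image(), this call raises the "not an element of this graph" error.
    image = Image.open(path)
    result = yolo.detect_image(image)
    result.save('result.jpg')

t = threading.Thread(target=worker, args=('test.jpg',))
t.start()
t.join()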
I also faced this issue with the yolov3 model. It was because I had copied the wrong utils.py file; it was missing get_colors_for_classes and draw_boxes. You can copy utils.py from this link: https://github.com/dudeperf3ct/DL_Notebooks/tree/master/Object%20Detection/Keras/yolo. Hopefully this helps if you are also stuck on yolov3.
You can comment out the "keras_learning_phase:0" entry; it runs fine, I have tried it.

This worked for me.
Thanks @sbillin,
when using multiple threads, you have to set the default graph in the non-main threads before running detection
This worked for me. I encountered this issue when I used YOLOv3 in FastAPI.
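For anyone in the same setting, a minimal sketch of how the fix applies there (the FastAPI app and endpoint below are illustrative assumptions, not code from this repo): FastAPI runs plain def endpoints on a thread pool, so detection happens off the main thread and the same default-graph fix is needed inside detect_image().

from fastapi import FastAPI, File, UploadFile
from PIL import Image
from yolo import YOLO   # patched so detect_image() wraps sess.run in graph.as_default()

app = FastAPI()
yolo = YOLO()            # created once, on the main thread, at startup

@app.post("/detect")
def detect(file: UploadFile = File(...)):
    # A sync endpoint: FastAPI executes it on a worker thread,
    # so detect_image() must set the default graph internally.
    image = Image.open(file.file)
    result = yolo.detect_image(image)
    result.save("result.jpg")
    return {"status": "ok"}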