"ValueError: The layer sequential has never been called and thus has no defined output." when the model's been build and called
I am currently using tensorflow 2.17 with keras 3.4.1 under Ubuntu 24.04 LTS. I have also reproduced the issue with tf-nightly 2.18.0.dev20240731 (keras-nightly 3.4.1.dev2024073103).
I encountered the issue when I downloaded a model I had run on a cluster under tf 2.17/keras 3.4.1. I then tried to obtain some saliency maps on my computer after re-building the model without redefining all its layers from scratch.
See the following Google Drive folder for a reprex with the code, model, and a data sample: https://drive.google.com/drive/folders/15J_ghWXWbs8EmSVXedH6sJRvJcPUSTIW?usp=sharing
Running the reprex raises the following traceback:
```
ValueError                                Traceback (most recent call last)
Cell In[1], line 45
     42     class_activation_map = tf.expand_dims(class_activation_map, axis=-1)
     43     return class_activation_map
---> 45 layer_cam_test = layer_cam(img=test_sample, model=model, label_index=0)

Cell In[1], line 24, in layer_cam(img, label_index, model)
     22 print(layer_names)
     23 for layer_name in layer_names[-1:]:
---> 24     grad_model = tf.keras.models.Model([model.inputs], [model.get_layer(layer_name).output, model.output])  # bug's here
     25     with tf.GradientTape() as tape:
     26         tape.watch(img)

File ~/miniconda3/envs/envtfnightly/lib/python3.11/site-packages/keras/src/ops/operation.py:266, in Operation.output(self)
    256 @property
    257 def output(self):
    258     """Retrieves the output tensor(s) of a layer.
    259
    260     Only returns the tensor(s) corresponding to the *first time*
    (...)
    264         Output tensor or list of output tensors.
    265     """
--> 266     return self._get_node_attribute_at_index(0, "output_tensors", "output")

File ~/miniconda3/envs/envtfnightly/lib/python3.11/site-packages/keras/src/ops/operation.py:285, in Operation._get_node_attribute_at_index(self, node_index, attr, attr_name)
    269 """Private utility to retrieves an attribute (e.g. inputs) from a node.
    270
    271 This is used to implement the properties:
    (...)
    282     The operation's attribute `attr` at the node of index `node_index`.
    283 """
    284 if not self._inbound_nodes:
--> 285     raise ValueError(
    286         f"The layer {self.name} has never been called "
    287         f"and thus has no defined {attr_name}."
    288     )
    289 if not len(self._inbound_nodes) > node_index:
    290     raise ValueError(
    291         f"Asked to get {attr_name} at node "
    292         f"{node_index}, but the operation has only "
    293         f"{len(self._inbound_nodes)} inbound nodes."
    294     )

ValueError: The layer sequential has never been called and thus has no defined output.
```
There are two workarounds where the ValueError is not raised:
1) Using `grad_model = keras.models.Model([model.inputs], [model.get_layer(last_conv_layer_name).output, model.get_layer(Name_of_last_deep_layer).output])`, but it results in `None` gradients in the rest of my code.
2) Redefining the model completely from scratch and loading only the weights, i.e., using:
```python
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(27, 75, 93, 81, 1)),  # time_steps, depth, height, width, channels
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(6, kernel_size=7, activation='relu', kernel_initializer='he_normal')),
    tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
    tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(32, kernel_size=3, activation='relu', kernel_initializer='he_normal')),
    tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
    tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(128, kernel_size=2, activation='relu', kernel_initializer='he_normal')),
    tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
    tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(256, kernel_size=2, activation='relu', kernel_initializer='he_normal')),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),
    tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
    tf.keras.layers.Conv1D(256, kernel_size=5, activation='relu', kernel_initializer='he_normal'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv1D(512, kernel_size=3, activation='relu', kernel_initializer='he_normal'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv1D(1024, kernel_size=2, activation='relu', kernel_initializer='he_normal'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.load_weights(...)
```
This version does not raise any error.
Thanks a lot!
Hi @Senantq, could you try calling the model directly via model(...) rather than model.call? Additionally, I see that model.build was called without an input shape.
Could you also supply the original source code of the model rather than just the .keras file?
Hi @nkovela1, and thank you for your help. I have updated the reprex, added the model code to it, and made it a bit more explicit. Do not hesitate to ask if anything's still lacking. Have a nice day!
@Senantq
Isn't this the same issue as fixed here?
https://github.com/keras-team/keras/issues/20155#issue-2482970454
You may try using keras-nightly.
I have just installed keras-nightly 3.5.0.dev2024090403 in a new conda environment, and it doesn't solve the problem, unfortunately.
```python
import keras
from keras import Input, Model
from keras.layers import Conv2D

model = keras.Sequential([
    Input(shape=(10, 10, 3)),
    Conv2D(filters=32, kernel_size=3),
])
model_2 = Model([model.inputs], [model.output])  # <-- change
```
errors in a similar way.
`output` is an output tensor, not a layer. One can use `outputs`, which returns a KerasTensor with a history (layer information). Specifically, `model.outputs[-1]` or the `get_layer` equivalent.
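A minimal sketch of that suggestion, reusing the small Conv2D model above (assuming Keras 3, where a Sequential model built from an Input exposes `inputs`/`outputs` through its internal functional graph even before it is called):

```python
import keras
from keras import Input, Model
from keras.layers import Conv2D

model = keras.Sequential([
    Input(shape=(10, 10, 3)),
    Conv2D(filters=32, kernel_size=3),
])

# model.output (singular) raises "has never been called ...", but the plural
# model.outputs comes from the internal graph and carries layer history.
model_2 = Model(model.inputs, [model.outputs[-1]])
```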
Hello ghsanti, and thank you for your answer. I must admit that it is still not totally clear to me. In particular, I don't see why
```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10, 10, 3)),
    keras.layers.Conv2D(filters=32, kernel_size=3),
])
model_2 = keras.Model([model.inputs], [model.output])
```
worked with keras 2 (installed via tensorflow==2.10) but not with the newer versions. Have I missed something during the change to keras 3? I don't see it listed among the major new features at https://github.com/keras-team/keras/issues/18467. Plus, it feels like it breaks a lot of not-that-old code, such as saliency-method or transfer-learning code.
@Senantq
- This works:

```python
model = keras.Sequential([Input(shape=(4, 4, 3)), Flatten(), Dense(units=5)])
model(keras.Input((4, 4, 3)))  # the extra symbolic call
model_2 = keras.Model([model.inputs], [model.output])
```
- This too:

```python
i = Input(shape=(4, 4, 3))
x = Flatten()(i)
x = Dense(units=5)(x)
model1 = keras.Model(i, x)
model2 = keras.Model([model1.inputs], [model1.output])
```
Sequential needs the extra call to build the layers.
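Applied to the original reprex, the fix would presumably look like this (a sketch only, not tested against the Google Drive code; `layer_name` is the variable from the Layer-CAM loop in the traceback, and the input shape is the one from the model definition above):

```python
import tensorflow as tf

# One extra call on a symbolic Input creates an inbound node on the
# Sequential model, so model.output (and the layers' .output) become defined.
model(tf.keras.Input(shape=(27, 75, 93, 81, 1)))

grad_model = tf.keras.models.Model(
    [model.inputs],
    [model.get_layer(layer_name).output, model.output],
)
```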
Other related threads:
- One needs to call the model with a symbolic tensor or an Input layer (which kind of defeats the purpose of Input in the first place? But it's how it's done, since it needs an extra call.) Someone comments about the same contradiction here:
  > (...) when I am initializing the densenet model, I am providing the input shape. Then why do I need to provide it two times?
- If you call the model with a standard list/tensor etc., it will not change anything (this should be made clear in the error message; see the sketch after this list.)
- I'd expect that the call is automatic if Input is present (a fix to the source code.) Here is a comment suggesting something similar.
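For instance (a minimal sketch of the eager-vs-symbolic point above, assuming the small Conv2D model from earlier and Keras 3's node semantics):

```python
import numpy as np
import keras

model(np.zeros((1, 10, 10, 3)))  # eager call: runs fine, but creates no node
# model.output  -> would still raise "has never been called ..."

model(keras.Input((10, 10, 3)))  # symbolic call: creates an inbound node
print(model.output)              # now defined
```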
I agree that the error message should at least be a bit more explicit, if possible. In any case, thank you for the time spent.
Hi @Senantq,
Are you still able to reproduce this issue?
This issue is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 28 days. Please reopen if you'd like to work on this further.