segmentation_models
ValueError: Layer model_8 expects 1 input(s), but it received 2 input tensors
I'm getting an error when running `model.fit` in the multiclass training example.
```python
history = model.fit(
    train_dataloader,
    steps_per_epoch=len(train_dataloader),
    epochs=EPOCHS,
    callbacks=callbacks,
    validation_data=valid_dataloader,
    validation_steps=len(valid_dataloader),
)
```
Any ideas on what may be causing it? I suspect it has to do with my train_dataloader object, but I've prepared it as shown in the example.
```
ValueError: in user code:

    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:784 train_function  *
        return step_function(self, iterator)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:774 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1261 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2794 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3217 _call_for_each_replica
        return fn(*args, **kwargs)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:767 run_step  **
        outputs = model.train_step(data)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py:733 train_step
        y_pred = self(x, training=True)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py:977 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
    F:\anaconda3\envs\tf-n-gpu\lib\site-packages\tensorflow\python\keras\engine\input_spec.py:204 assert_input_compatibility
        raise ValueError('Layer ' + layer_name + ' expects ' +

    ValueError: Layer model_8 expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=uint8>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None, None, None) dtype=float32>]
```
Can you show your data_loader code?
I am using the multiclass segmentation example code that was provided as a basis.
```python
class Dataloder(keras.utils.Sequence):
    """Load data from dataset and form batches

    Args:
        dataset: instance of Dataset class for image loading and preprocessing.
        batch_size: Integer number of images in batch.
        shuffle: Boolean, if `True` shuffle image indexes each epoch.
    """

    def __init__(self, dataset, batch_size=1, shuffle=False):
        self.dataset = dataset
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.indexes = np.arange(len(dataset))
        self.on_epoch_end()

    def __getitem__(self, i):
        # collect batch data
        start = i * self.batch_size
        stop = (i + 1) * self.batch_size
        data = []
        for j in range(start, stop):
            data.append(self.dataset[j])

        # transpose list of lists
        batch = [np.stack(samples, axis=0) for samples in zip(*data)]

        return batch

    def __len__(self):
        """Denotes the number of batches per epoch"""
        return len(self.indexes) // self.batch_size

    def on_epoch_end(self):
        """Callback function to shuffle indexes each epoch"""
        if self.shuffle:
            self.indexes = np.random.permutation(self.indexes)
```
Were you able to figure it out? Your error is just showing that you're providing two tensors as a list, which is coming from `__getitem__`. Your batch size is set correctly, right? (i.e., is your model expecting a batch size of 2?)
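For what it's worth, the mechanism can be seen in how Keras splits each batch. A minimal sketch (assuming TF ≥ 2.4, where `tf.keras.utils.unpack_x_y_sample_weight` is public API; newer Keras releases may also convert lists to tuples before unpacking):

```python
# Sketch: train_step unpacks each batch into (x, y, sample_weight).
# In the TF build from the traceback above, only a *tuple* is unpacked;
# a *list* [x, y] falls through unchanged and is handed to the model
# as x, i.e. as two input tensors -- hence "expects 1 input(s), but it
# received 2 input tensors".
import tensorflow as tf

x = tf.zeros((2, 4))
y = tf.zeros((2, 1))

print(tf.keras.utils.unpack_x_y_sample_weight((x, y)))
# -> (x, y, None): images become inputs, masks become targets
```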
@JordanMakesMaps I read somewhere that newer versions of TensorFlow/Keras require the batch to be returned as a tuple rather than a list:
```python
def __getitem__(self, i):
    # collect batch data
    start = i * self.batch_size
    stop = (i + 1) * self.batch_size
    data = []
    for j in range(start, stop):
        data.append(self.dataset[j])

    # transpose list of lists
    batch = [np.stack(samples, axis=0) for samples in zip(*data)]

    # newer versions of tf/keras want the batch as a tuple rather than a list
    return tuple(batch)
```
The above modification works for me.
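To make the difference concrete, here is a self-contained sketch (not from this repo; the toy model, shapes, and random data are illustrative assumptions) where the same `Sequence` trains when `__getitem__` returns a tuple, and reproduces the error in this thread when it returns a list:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class ToySequence(keras.utils.Sequence):
    """Yields batches either as a tuple (x, y) or as a list [x, y]."""

    def __init__(self, as_tuple, n=8, batch_size=2):
        super().__init__()
        self.as_tuple = as_tuple
        self.batch_size = batch_size
        self.x = np.random.rand(n, 32, 32, 3).astype(np.float32)
        self.y = np.random.rand(n, 1).astype(np.float32)

    def __len__(self):
        return len(self.x) // self.batch_size

    def __getitem__(self, i):
        s = slice(i * self.batch_size, (i + 1) * self.batch_size)
        batch = [self.x[s], self.y[s]]
        return tuple(batch) if self.as_tuple else batch

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    keras.layers.Flatten(),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

model.fit(ToySequence(as_tuple=True), epochs=1)   # trains fine

# On the TF versions in this thread, the list variant reproduces
# "ValueError: Layer ... expects 1 input(s), but it received 2 input tensors"
# (newer releases may tolerate a list):
# model.fit(ToySequence(as_tuple=False), epochs=1)
```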
Where do I make this modification?
@jkViswanadham @WeiChihChern's modified function is for the dataloader class provided in this repository.
I made this modification and it worked; however, my desktop went to a black screen during the second epoch! Do you have any idea what I should change? Thank you very much.
Very frustrated by these nerdy retro-compatibility breaks...
My boss is not interested in bugs that appear in software that was already working, or in cross-linked libraries that no longer work together.
I don't know about you, but after 30+ years of programming I am quite fed up with losing my time like this. Free software doesn't mean the freedom to crash somebody's work because someone "read it somewhere stating that in newer tensorflow or keras require the return batch as tuple rather than list".
Not against you, guys, I only need to let off steam.
I have found the same casting hint as well, though it then returns an error message like: `NotImplementedError: Cannot convert a symbolic Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.`
@JordanMakesMaps any thoughts or prior experience on what should be debugged? How do you approach debugging when this error message pops up?
@NavidCOMSC I use a custom dataloader instead of the one provided here in the repo. I just checked my code, and I do in fact return a tuple instead of a list. I guess I ran into that error a while back, made the change, and completely forgot about it:

```python
...
batch_x = np.array(processed_images)
batch_y = np.array(processed_masks)

del the_tile, the_mask, one_hot_mask, processed_image, processed_mask

return (batch_x, batch_y)
```
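For anyone adapting this, a hedged sketch of how such a `__getitem__` might look end to end. Only `batch_x`/`batch_y` and the tuple return come from the snippet above; `self.dataset`, the loop, and the other names are illustrative assumptions:

```python
def __getitem__(self, i):
    # Hypothetical completion of the snippet above: collect one batch of
    # preprocessed (image, one-hot mask) pairs and return them as a tuple.
    start = i * self.batch_size
    stop = (i + 1) * self.batch_size

    processed_images, processed_masks = [], []
    for j in range(start, stop):
        image, mask = self.dataset[j]  # assumed to yield preprocessed pairs
        processed_images.append(image)
        processed_masks.append(mask)

    batch_x = np.array(processed_images)
    batch_y = np.array(processed_masks)
    return (batch_x, batch_y)  # tuple, so Keras reads it as (inputs, targets)
```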
@qubvel the tuple fix above should be integrated into the examples.