segmentation_models_3D
ValueError: axes don't match array
Hi, help please! :(
I'm trying to create a simple model like this:
import segmentation_models_3D as sm
import os
sm.set_framework('tf.keras')
os.environ["KERAS_BACKEND"] = "tensorflow"
model1 = sm.Unet(backbone_name="resnet50", input_shape=(32, 96, 96, 1), encoder_weights="imagenet")
but it raises ValueError: axes don't match array.
I tried the suggestion from #8, setting os.environ["KERAS_BACKEND"] = "tensorflow" as shown above, and nothing changed!
The tst_keras.py file works for me, so I don't understand what is going on. (I'm running the code above in Visual Studio Code, in an .ipynb file.)
Please :(
Update:
The problem is the number of channels. If I use input_shape=(32, 96, 96, 3) instead of input_shape=(32, 96, 96, 1), it works. Can't Unet be used for single-channel volumes?
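For reference, a minimal sketch contrasting the two shapes above; the encoder_weights=None variant at the end is an assumption on my side (training the encoder from scratch), not something confirmed in this thread:

import os
os.environ["KERAS_BACKEND"] = "tensorflow"

import segmentation_models_3D as sm
sm.set_framework('tf.keras')

# works: the imagenet encoder weights expect 3 input channels
model_rgb = sm.Unet(backbone_name="resnet50",
                    input_shape=(32, 96, 96, 3),
                    encoder_weights="imagenet")

# fails with "ValueError: axes don't match array": a single input channel
# cannot be matched against the 3-channel imagenet weights
# model_gray = sm.Unet(backbone_name="resnet50",
#                      input_shape=(32, 96, 96, 1),
#                      encoder_weights="imagenet")

# assumption: without pretrained weights, a 1-channel input should be accepted
model_scratch = sm.Unet(backbone_name="resnet50",
                        input_shape=(32, 96, 96, 1),
                        encoder_weights=None)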
As far as I remember, there are no imagenet weights for single-channel input. It could be fixed in the code, but it is currently not supported.
The simplest solution is to triplicate the input to the model.
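A minimal sketch of this workaround (the helper name is illustrative, not part of the library): replicate the single channel three times, either on the data side with NumPy or inside the model with a Concatenate layer so the stored volumes stay single-channel.

import numpy as np
import tensorflow as tf
import segmentation_models_3D as sm

sm.set_framework('tf.keras')

# Option A: replicate the channel on the data side.
# volume: (batch, 32, 96, 96, 1) -> (batch, 32, 96, 96, 3)
def triplicate_channels(volume):
    return np.repeat(volume, 3, axis=-1)

# Option B: replicate inside the model, so the stored data stays single-channel.
base = sm.Unet(backbone_name="resnet50",
               input_shape=(32, 96, 96, 3),
               encoder_weights="imagenet")
inputs = tf.keras.layers.Input(shape=(32, 96, 96, 1))
x = tf.keras.layers.Concatenate(axis=-1)([inputs, inputs, inputs])
model = tf.keras.Model(inputs, base(x))

Note that only the input tensor itself becomes three times larger; the rest of the network is the same either way.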
The simplest solution is to triplicate the input to the model.
@ZFTurbo
This is a possibility, but it forces my batch size to be 3 times smaller. If there is no alternative, it is an option!
Isn't there an option in the convert_imagenet_weights_to_3D_models.py code to do this for a single channel?
Thank you very much for your quick reply!
- If you increase the input from 1 to 3 channels, the memory required for the model will barely change.
- Yes, convert_imagenet_weights_to_3D_models.py - this code can be used to generate the required weights from the 2D variant for 1-channel input.
@ZFTurbo
- If you increase the input from 1 to 3 channels, the memory required for the model will barely change.
It does change: the data itself becomes three times as large. If a volume occupies 0.5 GB, tripling the channels makes it 1.5 GB. But for the time being, this approximation can work for me.
- Yes, convert_imagenet_weights_to_3D_models.py - this code can be used to generate the required weights from the 2D variant for 1-channel input.
Well, I'll try to do it in the future when I have more time!
@ZFTurbo, could you please explain what should be modified in convert_imagenet_weights_to_3D_models.py so that it is possible to generate weights from the 2D variant for 1 channel only?
If I set shape_size_3D = (64, 64, 64, 1), I get:
ValueError: Layer bn_data weight shape (1,) is not compatible with provided weight shape (3,).
If I also set shape_size_2D = (224, 224, 1), I get:
ValueError: Cannot assign value to variable ' bn_data/beta:0': Shape mismatch.The variable shape (1,), and the assigned value shape (3,) are incompatible.
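Not a confirmed recipe from the maintainer, just a sketch of the usual trick for collapsing RGB-pretrained weights to one channel: sum the first convolution kernel over its input-channel axis and reduce the 3-element bn_data parameters to a single value. The helper below is hypothetical (it is not part of convert_imagenet_weights_to_3D_models.py) and only shows the shape manipulation:

import numpy as np

# Hypothetical helper: collapse the RGB input dimension of pretrained weights
# so they fit a single-channel model. weights_3ch are the pretrained arrays,
# weights_1ch_template are the (randomly initialised) arrays of the 1-channel
# target model, used only for their shapes.
def rgb_weights_to_single_channel(weights_3ch, weights_1ch_template):
    converted = []
    for w3, w1 in zip(weights_3ch, weights_1ch_template):
        if w3.shape == w1.shape:
            # layer does not depend on the number of input channels
            converted.append(w3)
        elif w3.ndim >= 3 and w3.shape[-2] == 3 and w1.shape[-2] == 1:
            # first convolution kernel, e.g. (7, 7, 3, 64) -> (7, 7, 1, 64);
            # summing over the channel axis keeps the response to a grayscale
            # image that would have been replicated across R, G and B
            converted.append(w3.sum(axis=-2, keepdims=True))
        elif w3.shape == (3,) and w1.shape == (1,):
            # input batch norm (bn_data) parameters: reduce 3 values to 1
            # (averaging is a rough heuristic, not an exact equivalence)
            converted.append(np.array([w3.mean()], dtype=w3.dtype))
        else:
            raise ValueError(f"unexpected shapes {w3.shape} vs {w1.shape}")
    return converted

The idea would be to load the 2D model with imagenet weights and 3 channels, build the 1-channel target with random weights, pass each layer's weights through a helper like this, and only then run the 2D-to-3D conversion; where exactly that hooks into convert_imagenet_weights_to_3D_models.py depends on the script's internals.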