FASNet
Can't use the weights directly
I get an error when I run test.py: ValueError: Dimension 0 in both shapes must be equal, but are 25088 and 4608. Shapes are [25088,256] and [4608,256]. for 'Assign_84' (op: 'Assign') with input shapes: [25088,256], [4608,256]
Same issue...
I have an error like ...
Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_4/MaxPool' (op: 'MaxPool') with input shapes: [?,1,81,128].
Please help, @OeslleLucena, as we are relying on your excellent work!
Which Keras backend are you using? I got the negative dimension error when I tried to run the code with the TensorFlow backend.
It seems that something is wrong with the input shape; the authors did not specify a suitable input shape, which keeps causing this error. The model was most likely built with the Theano backend (channels-first, 'th' image dim ordering). Adding the code below solved the negative dimension problem for me:
from keras import backend as K
K.set_image_dim_ordering('th')
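(On newer Keras releases where set_image_dim_ordering has been removed, the equivalent setting, as far as I know, is the image data format; the snippet below is a sketch under that assumption:)
from keras import backend as K
K.set_image_data_format('channels_first')  # same effect as the 'th' ordering above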
You can also try setting the input shape to 112x112.
For anyone still facing the 'Negative Dimension' issue: it is caused by how the Convolution2D() calls in the load_model() method are written. For instance, take the line below:
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
It is interpreted as kernel size = 3 and stride = 3. When stride != 1, even padding='same' will not produce a feature map of the same size as the input. Because of this, the feature map keeps shrinking across consecutive Convolution2D calls (whereas in the VGG16 architecture, consecutive conv layers keep the same feature size). As a result, by line 38, where MaxPooling2D is applied, the feature map has been reduced to 1x1, and max pooling with a 2x2 kernel on a 1x1 feature map obviously raises an error.
In order to resolve this, I made a minor change:
model.add(Convolution2D(64, (3, 3), activation='relu', name='conv1_1'))
Here kernel_size=(3, 3) and strides defaults to (1, 1).
This solved the issue of 'Negative Dimension Error' for me.
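For context, here is a minimal sketch of what the first block looks like with the corrected Keras 2 signature; the padding and input shape are assumptions on my side, not the exact FASNet code:
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D

model = Sequential()
# kernel_size=(3, 3), strides default to (1, 1), so the spatial size is preserved
model.add(Convolution2D(64, (3, 3), activation='relu', padding='same',
                        name='conv1_1', input_shape=(3, 96, 96)))
model.add(Convolution2D(64, (3, 3), activation='relu', padding='same', name='conv1_2'))
# only the pooling layers halve the feature map: 96 -> 48 here
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))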
Also, please note that the input image size has to be 96x96. This has been mentioned by the author (@OeslleLucena) here. In short, the model was trained with a 96x96 input, so we are restricted to the same size at inference: the weights of the first fully connected layer are sized for the flattened feature map that a 96x96 input produces, which is exactly why the shape mismatch (25088 vs 4608) reported in the first post appears. Otherwise, for VGG16 you can use sizes below 224x224.
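To see where those two numbers come from, here is a quick back-of-the-envelope check, assuming the standard VGG16 backbone (five stride-2 poolings and 512 channels in the last conv block):
def flattened_size(input_side, pools=5, channels=512):
    side = input_side
    for _ in range(pools):
        side //= 2  # each 2x2, stride-2 max pooling halves the spatial size
    return side * side * channels

print(flattened_size(96))   # 3*3*512 = 4608  -> matches the pretrained weights
print(flattened_size(224))  # 7*7*512 = 25088 -> the size reported in the error above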