
Can't use weights directly

txthanh1178793 opened this issue 5 years ago • 6 comments

I get an error when I run test.py: ValueError: Dimension 0 in both shapes must be equal, but are 25088 and 4608. Shapes are [25088,256] and [4608,256]. for 'Assign_84' (op: 'Assign') with input shapes: [25088,256], [4608,256]

txthanh1178793 avatar Mar 29 '19 04:03 txthanh1178793

Same issue...

Rasoul20sh avatar Jul 02 '19 11:07 Rasoul20sh

I get an error like this: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_4/MaxPool' (op: 'MaxPool') with input shapes: [?,1,81,128].

Please help @OeslleLucena, as we are using your best work!

AayushShah25 avatar Jan 15 '20 18:01 AayushShah25

Which Keras backend are you using? I got the negative dimension error when I tried to run the code with the TensorFlow backend.

nalinmittal-eclipse avatar Jun 09 '20 11:06 nalinmittal-eclipse

It seems the problem is the input shape; the authors did not document the expected input shape, which is what triggers the error. Also, the model was most likely trained with the Theano backend (channels-first ordering). Adding the code below fixes the negative dimension problem:

from keras import backend as K
K.set_image_dim_ordering('th')
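Note that set_image_dim_ordering was removed in later Keras 2.x releases. Assuming a newer Keras install, the equivalent call (a sketch, not tested against this repo) is:

```python
from keras import backend as K

# 'th' ordering means channels-first; newer Keras spells it like this:
K.set_image_data_format('channels_first')
```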

SE2AI avatar Jun 30 '20 03:06 SE2AI

You can set the input shape to 112x112.
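That presumably works because 112 also floors down to a 3x3 map after VGG16's five poolings, giving the same flatten size the saved weights expect; a quick check:

```python
# Five 2x2, stride-2 max-poolings with 'valid' padding floor-halve the side.
side = 112
for _ in range(5):
    side //= 2            # 112 -> 56 -> 28 -> 14 -> 7 -> 3
print(side * side * 512)  # 4608: same flatten size as a 96x96 input
```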

lawo123 avatar Dec 01 '20 11:12 lawo123

For anyone still facing the 'Negative Dimension' issue: it is caused by a likely typo in the Convolution2D() calls inside the load_model() method. For instance, take the line below:

model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))

It is interpreted as kernel size = 3 and stride = 3. When the stride is not 1, even padding='same' will not preserve the feature-map size, so the feature map keeps shrinking across consecutive Convolution2D calls (whereas in the VGG16 architecture, consecutive conv layers keep the same feature size). As a result, by the MaxPooling2D call on line 38 the feature map has shrunk to 1x1, and max pooling with a 2x2 kernel over a 1x1 feature map naturally raises an error.

In order to resolve this, I made a minor change:

model.add(Convolution2D(64, (3, 3), activation='relu', name='conv1_1'))

Here, kernel_size=(3, 3), and strides defaults to (1, 1).

This solved the issue of 'Negative Dimension Error' for me.
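A standalone sketch (not the repo's code) of the two interpretations under the Keras 2 API:

```python
from keras.models import Sequential
from keras.layers import Conv2D

# Under the Keras 2 signature Conv2D(filters, kernel_size, strides=(1, 1), ...),
# the old Keras 1 call Convolution2D(64, 3, 3) is parsed as kernel_size=3
# AND strides=3, so every such conv shrinks the feature map by about 3x.
buggy = Sequential([Conv2D(64, 3, 3, padding='same', input_shape=(96, 96, 3))])
print(buggy.output_shape)  # (None, 32, 32, 64) -- stride 3 shrank the map

fixed = Sequential([Conv2D(64, (3, 3), padding='same', input_shape=(96, 96, 3))])
print(fixed.output_shape)  # (None, 96, 96, 64) -- strides default to (1, 1)
```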

Also, please note that the size of the input image has to be 96x96, as mentioned by the author (@OeslleLucena) here. In short, because the model was trained with an input size of 96x96, we are restricted to the same size during inference. With plain VGG16, by contrast, you can also use sizes below 224x224.
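That input-size constraint is also where the shape mismatch in the original report comes from. A sketch of the arithmetic, assuming the standard VGG16 backbone (five 2x2 poolings, 512 channels after the last conv block):

```python
# Flattened feature size entering the first 256-unit FC layer.
def flatten_size(side):
    for _ in range(5):    # five 2x2, stride-2 max-pooling stages
        side //= 2
    return side * side * 512

print(flatten_size(96))   # 4608  -> matches the saved FASNet weights
print(flatten_size(224))  # 25088 -> what a 224x224 graph builds, hence the
                          #          [25088,256] vs [4608,256] Assign error
```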

pratikadarsh avatar Aug 09 '21 15:08 pratikadarsh