```python
# Extend Y labels 10 fold, so that all images have labels
Train_Y_spatial = np.repeat(Train_Y, timesteps_TIM, axis=0)
Test_Y_spatial = np.repeat(Test_Y, timesteps_TIM, axis=0)
X = Train_X_spatial.reshape(Train_X_spatial.shape[0], channel, r, w)
y = Train_Y_spatial.reshape(Train_Y_spatial.shape[0], n_exp)
```
Here, `Train_X_spatial` has shape e.g. nVid*9 x 224 x 224 x 3, and `reshape` is then used to get `X` with shape nVid*9 x 3 x 224 x 224. I think `reshape` will mess up the image data here. `X = np.transpose(Train_X_spatial, (0, 3, 1, 2))` seems to do the job correctly. What's your opinion?
Best regards!
I think reshape is fine. As I remember, the purpose of this is that my Keras setup expects "channels first". Reshape should be okay. Any issues on your side?
Hi, thanks for the response. I know the code uses "channels first", and the problem is not about that; the code does run with `reshape` without raising any error. What I am saying is that `reshape` appears to scramble the image data, because `reshape` does not change the underlying data order, it only reinterprets the flat buffer under the new shape.
For example, a 2x2x3 image
`[[[r11, g11, b11], [r12, g12, b12]], [[r21, g21, b21], [r22, g22, b22]]]`
will be reshaped to the 3x2x2 array
`[[[r11, g11], [b11, r12]], [[g12, b12], [r21, g21]], [[b21, r22], [g22, b22]]]`,
which is not a correct channels-first image structure: each 2x2 "channel" mixes r, g, and b samples.
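This is easy to verify with a tiny NumPy experiment (a minimal sketch; the array and shapes here are made up for illustration, not the repository's actual data):

```python
import numpy as np

# One 2x2 RGB image in NHWC layout; each value is unique so we can
# track where it ends up after the layout change.
img = np.arange(1 * 2 * 2 * 3).reshape(1, 2, 2, 3)

# reshape only reinterprets the flat buffer: "channel 0" of the result
# is a mixture of R, G and B samples.
reshaped = img.reshape(1, 3, 2, 2)

# transpose actually moves the axes, producing a true NCHW layout:
# channel 0 of the result is exactly the R plane img[..., 0].
transposed = np.transpose(img, (0, 3, 1, 2))

print(np.array_equal(transposed[0, 0], img[0, :, :, 0]))  # True
print(np.array_equal(reshaped[0, 0], img[0, :, :, 0]))    # False
```

The same check scales to the nVid*9 x 224 x 224 x 3 batch: only `transpose` (or `np.moveaxis`) preserves the pixel structure.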
I also compared the results of using "reshape" and "transpose", and it turned out that "transpose" gives better results than "reshape". This may be more convincing evidence.
For the flag='st' and single dataset on CASME2, my "reshape" result is:
micro(war): 0.317, F1: 0.258
my "transpose" result is:
war: 0.419, F1: 0.403, uar: 0.355
For the flag='st7se' and single dataset on CASME2, my "reshape" result is:
war: 0.463, F1: 0.413, uar: 0.411
my "transpose" result is:
war: 0.504, F1: 0.500, uar: 0.445
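For reference, WAR (weighted average recall, i.e. overall accuracy) and UAR (unweighted average recall, the mean of per-class recalls) can be computed as below. This is a minimal sketch with made-up predictions, not the repository's actual evaluation code:

```python
import numpy as np

def war_uar(y_true, y_pred):
    """WAR = overall accuracy; UAR = mean of per-class recalls."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    war = np.mean(y_true == y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    uar = np.mean(recalls)
    return war, uar

# Toy example: class 0 dominates, so WAR and UAR differ.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 1, 0]
war, uar = war_uar(y_true, y_pred)
print(war, uar)  # ~0.833 and 0.75
```

Because micro-expression datasets like CASME2 are class-imbalanced, UAR is usually the stricter of the two metrics.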
@bomb2peng

```
Traceback (most recent call last):
  File "main.py", line 61, in <module>
    main(args)
  File "main.py", line 15, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhh/TensorFlow/Micro-Expression-with-Deep-Learning/train.py", line 262, in train
    vgg_model = VGG_16(spatial_size = spatial_size, classes=n_exp, channels=3, weights_path='VGG_Face_Deep_16.h5')
  File "/home/lhh/TensorFlow/Micro-Expression-with-Deep-Learning/models.py", line 139, in VGG_16
    model.load_weights(weights_path)
  File "/home/lhh/.pythonlib/lib/python3.6/site-packages/keras/engine/network.py", line 1180, in load_weights
    f, self.layers, reshape=reshape)
  File "/home/lhh/.pythonlib/lib/python3.6/site-packages/keras/engine/saving.py", line 929, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/home/lhh/.pythonlib/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2430, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 594, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
    validate_shape=validate_shape)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 59, in assign
    use_locking=use_locking, name=name)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3162, in create_op
    compute_device=compute_device)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3208, in _create_op_helper
    set_shapes_for_outputs(op)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2427, in set_shapes_for_outputs
    return _set_shapes_for_outputs(op)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2400, in _set_shapes_for_outputs
    shapes = shape_func(op)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2330, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/home/lhh/anaconda3/envs/dv/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 1 in both shapes must be equal, but are 2622 and 1000. Shapes are [4096,2622] and [4096,1000]. for 'Assign_30' (op: 'Assign') with input shapes: [4096,2622], [4096,1000].
```
What is the reason? Is it related to the TensorFlow version? Which TensorFlow version are you using?