Use smaller kernel size in conv layers
Has anyone tried using a smaller kernel size in the conv layers? The 7×7 kernels are very large and cause a significant amount of computation. However, when I tried a smaller kernel size, it seems no heat map is learned by the conv layers. Has anyone run into a similar issue?
I did this by revising the conv layers to:
layer {
  name: "Mconv5_stage2_L2"
  type: "Convolution"
  bottom: "Mconv4_stage2_L2"
  top: "Mconv5_stage2_L2"
  param {
    lr_mult: 4.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 8.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "gaussian"
      std: 0.00999999977648
    }
    bias_filler {
      type: "constant"
    }
  }
}
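For reference, swapping the original 7×7 / pad-3 convolution for the 3×3 / pad-1 one shown above preserves the spatial size of the feature maps (46×69 in the dump below). A quick sanity check of the standard convolution output-size formula (a small illustrative sketch, not part of the original thread):

```python
def conv_out(size, kernel, pad, stride=1):
    # standard convolution output-size formula:
    # out = (in + 2*pad - kernel) // stride + 1
    return (size + 2 * pad - kernel) // stride + 1

# the heat maps in this thread are 46x69; both settings keep that size
for kernel, pad in [(7, 3), (3, 1)]:
    print(conv_out(46, kernel, pad), conv_out(69, kernel, pad))  # 46 69
```

So the shape math is not the problem; only the amount of computation per layer changes.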
Below are the maximum values of several layers, which show that Mconv6_stage2_L1 and Mconv6_stage2_L2 are all zeros.
('Mconv1_stage2_L1', (1, 128, 46, 69), 0.22660097)
('Mconv1_stage2_L2', (1, 128, 46, 69), 0.22140989)
('Mconv2_stage2_L1', (1, 128, 46, 69), 0.047528923)
('Mconv2_stage2_L2', (1, 128, 46, 69), 0.059828013)
('Mconv3_stage2_L1', (1, 128, 46, 69), 0.013270046)
('Mconv3_stage2_L2', (1, 128, 46, 69), 0.013880449)
('Mconv4_stage2_L1', (1, 128, 46, 69), 0.005226884)
('Mconv4_stage2_L2', (1, 128, 46, 69), 0.004463576)
('Mconv5_stage2_L1', (1, 128, 46, 69), 0.0017151545)
('Mconv5_stage2_L2', (1, 128, 46, 69), 0.016476745)
('Mconv6_stage2_L1', (1, 128, 46, 69), -0.0)
('Mconv6_stage2_L2', (1, 128, 46, 69), -0.0)
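One plausible explanation (an assumption on my part, not confirmed in this thread) for the geometric decay of these maxima is that the gaussian weight_filler with std ≈ 0.01 is too small for 3×3 kernels: with 128 input channels the fan-in is 128·3·3 = 1152, so each conv scales activations by only about 0.01·√1152 ≈ 0.34, and six stacked convs drive the signal to (near) zero. A minimal NumPy sketch that models each conv + ReLU as a random matrix multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = 128 * 3 * 3                      # 128 input channels, 3x3 kernel
x = np.abs(rng.standard_normal(fan_in))   # stand-in for a positive activation map
maxima = []
for layer in range(6):
    # gaussian weight_filler with std=0.01, as in the prototxt above
    w = rng.normal(0.0, 0.01, size=(fan_in, fan_in))
    x = np.maximum(w @ x, 0.0)            # conv + ReLU, modeled as a matmul
    maxima.append(float(x.max()))
print(maxima)  # maxima shrink layer after layer, mirroring the dump above
```

Under this assumption, a fan-in-aware initialization (e.g. a larger std when kernel_size shrinks) would keep the later Mconv layers alive.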
YES!!!! I met with a similar issue!! If I reduce the Mconv layers, this issue disappears!
@Hanhanhan11 What do you mean by reducing the Mconv layers? I still have this problem.
I reduced the kernel size of Mconv1-6 in stage 2 from 7 to 3, and I only use the first two stages.