Forward pass with different batch size cause segmentation fault in C++
Hi, all!
My question is quite similar to https://github.com/intel/caffe/issues/150, but the code there is Python, and that Python code works fine for me. However, I get a segmentation fault in C++. Here is my C++ code:
```cpp
#define CPU_ONLY
#include <caffe/caffe.hpp>
#include <iostream>

using namespace caffe;  // NOLINT(build/namespaces)
using namespace std;

int channel = 3;
int height = 227;
int width = 227;

int main() {
  char model_file[] = "/caffe/models/bvlc_alexnet/deploy.prototxt";
  char weights_file[] = "/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel";
  Caffe::set_mode(Caffe::CPU);
  static Net<float>* net_ = new Net<float>(model_file, TEST);
  net_->CopyTrainedLayersFrom(weights_file);
  for (int batch_size = 1; batch_size < 5; batch_size++) {
    Blob<float>* input_layer = net_->input_blobs()[0];
    input_layer->Reshape(batch_size, channel, height, width);
    net_->Reshape();
    cout << "forward begin with batch_size " << batch_size << endl;
    net_->Forward();
    cout << "forward end with batch_size " << batch_size << endl;
  }
  return 0;
}
```
AlexNet contains fully connected layers, so it doesn't allow a variable batch size.
Hi, ftian1.
Firstly, thanks for the reply.
I still have some questions:
- Why does the Python code in issue 150 work well while my C++ code gets a segmentation fault?
- Where can I find more information about which layers allow a variable batch size and which do not?
- Is there any way to use AlexNet with a variable batch size with the MKLDNN engine?
- An extra question: I found that a bigger batch size does not improve FPS (frames per second) in Intel Caffe, whereas a bigger batch size usually improves speed in GPU mode. Is this normal with MKLDNN?
I have the same problem. Although my net doesn't contain a fully connected layer, a forward pass with a different batch size still causes a segmentation fault in my C++ program. @ftian1
A fully connected layer's weight count is usually oc x ic x ih x iw if the axis is 1; if the axis is 0, the weight count would be oc x in x ic x ih x iw. The AlexNet case is the former, so changing "in" (the batch size) is allowed in your case, but it would not be allowed in the latter.
As for the C++ code issue, it's caused by the following:
- You have to call mn::init() before creating the net.
- Remove the net_->Reshape() call, as it's redundant and will trigger an assertion.
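Applying both fixes to the original program gives roughly the sketch below. This has not been verified against a specific Intel Caffe build, and the exact signature of mn::init() may vary between versions (some take argc/argv), so treat it as an illustration of the two changes rather than a finished program:

```cpp
#define CPU_ONLY
#include <caffe/caffe.hpp>
#include <iostream>

using namespace caffe;  // NOLINT(build/namespaces)

int main() {
  // Fix 1: initialize Intel Caffe's multinode machinery before any
  // Net is constructed (signature may differ across versions).
  mn::init();
  Caffe::set_mode(Caffe::CPU);
  Net<float> net("/caffe/models/bvlc_alexnet/deploy.prototxt", TEST);
  net.CopyTrainedLayersFrom("/caffe/models/bvlc_alexnet/bvlc_alexnet.caffemodel");
  for (int batch_size = 1; batch_size < 5; batch_size++) {
    Blob<float>* input_layer = net.input_blobs()[0];
    input_layer->Reshape(batch_size, 3, 227, 227);
    // Fix 2: no explicit net.Reshape() here; it is redundant and
    // triggers an assertion in Intel Caffe.
    std::cout << "forward begin with batch_size " << batch_size << std::endl;
    net.Forward();
    std::cout << "forward end with batch_size " << batch_size << std::endl;
  }
  return 0;
}
```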
@yflv-yanxia @sysuxiaoming