
Can't solve ERROR: Top blob 'data' produced by multiple sources.

eiraola opened this issue 4 years ago · 0 comments

I'm trying to train a segmentation model, and this error keeps appearing no matter what I do. This is my output.log:

I1111 14:16:18.795657 23179 upgrade_proto.cpp:1044] Attempting to upgrade input file specified using deprecated 'solver_type' field (enum): /var/lib/digits/jobs/20191111-141617-8504/solver.prototxt
I1111 14:16:18.795943 23179 upgrade_proto.cpp:1051] Successfully upgraded file specified using deprecated 'solver_type' field (enum) to 'type' field (string).
W1111 14:16:18.795953 23179 upgrade_proto.cpp:1053] Note that future Caffe releases will only support 'type' field (string) for a solver's type.
I1111 14:16:18.985635 23179 caffe.cpp:197] Using GPUs 0
I1111 14:16:18.985992 23179 caffe.cpp:202] GPU 0: Quadro M4000
I1111 14:16:19.570360 23179 solver.cpp:48] Initializing solver from parameters: test_iter: 3 test_interval: 26 base_lr: 0.01 display: 3 max_iter: 780 lr_policy: "step" gamma: 0.1 momentum: 0.9 weight_decay: 0.0001 stepsize: 258 snapshot: 26 snapshot_prefix: "snapshot" solver_mode: GPU device_id: 0 net: "train_val.prototxt" type: "Adam"
I1111 14:16:19.570463 23179 solver.cpp:91] Creating training net from net file: train_val.prototxt
I1111 14:16:19.570816 23179 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data
I1111 14:16:19.570822 23179 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data
I1111 14:16:19.570837 23179 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy_top_1
I1111 14:16:19.570840 23179 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy_top_5
I1111 14:16:19.571075 23179 net.cpp:52] Initializing net from parameters: state { phase: TRAIN } layer { name: "train-data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 224 mean_file: "/var/lib/digits/jobs/20191111-141539-152b/train_db/mean.binaryproto" } data_param { source: "/var/lib/digits/jobs/20191111-141539-152b/train_db/features"
batch_size: 10 backend: LMDB } image_data_param { shuffle: true } } layer { name: "train-data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 224 } data_param { source: "/var/lib/digits/jobs/20191111-141539-152b/train_db/labels" batch_size: 10 backend: LMDB } image_data_param { shuffle: true } } layer { name: "conv1_1" type: "Convolution" bottom: "data" top: "conv1_1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu1_1" type: "ReLU" bottom: "conv1_1" top: "conv1_1" } layer { name: "conv1_2" type: "Convolution" bottom: "conv1_1" top: "conv1_2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu1_2" type: "ReLU" bottom: "conv1_2" top: "conv1_2" } layer { name: "pool1" type: "Pooling" bottom: "conv1_2" top: "pool1" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "conv2_1" type: "Convolution" bottom: "pool1" top: "conv2_1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2_1" type: "ReLU" bottom: "conv2_1" top: "conv2_1" } layer { name: "conv2_2" type: "Convolution" bottom: "conv2_1" top: "conv2_2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2_2" type: "ReLU" bottom: "conv2_2" top: "conv2_2" } layer { name: "pool2" type: "Pooling" bottom: "conv2_2" top: "pool2" pooling_param { pool: MAX 
kernel_size: 2 stride: 2 } } layer { name: "conv3_1" type: "Convolution" bottom: "pool2" top: "conv3_1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu3_1" type: "ReLU" bottom: "conv3_1" top: "conv3_1" } layer { name: "conv3_2" type: "Convolution" bottom: "conv3_1" top: "conv3_2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu3_2" type: "ReLU" bottom: "conv3_2" top: "conv3_2" } layer { name: "conv3_3" type: "Convolution" bottom: "conv3_2" top: "conv3_3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu3_3" type: "ReLU" bottom: "conv3_3" top: "conv3_3" } layer { name: "pool3" type: "Pooling" bottom: "conv3_3" top: "pool3" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "conv4_1" type: "Convolution" bottom: "pool3" top: "conv4_1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4_1" type: "ReLU" bottom: "conv4_1" top: "conv4_1" } layer { name: "conv4_2" type: "Convolution" bottom: "conv4_1" top: "conv4_2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4_2" type: "ReLU" bottom: "conv4_2" top: "conv4_2" } layer { name: "conv4_3" type: "Convolution" bottom: 
"conv4_2" top: "conv4_3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4_3" type: "ReLU" bottom: "conv4_3" top: "conv4_3" } layer { name: "pool4" type: "Pooling" bottom: "conv4_3" top: "pool4" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "conv5_1" type: "Convolution" bottom: "pool4" top: "conv5_1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5_1" type: "ReLU" bottom: "conv5_1" top: "conv5_1" } layer { name: "conv5_2" type: "Convolution" bottom: "conv5_1" top: "conv5_2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5_2" type: "ReLU" bottom: "conv5_2" top: "conv5_2" } layer { name: "conv5_3" type: "Convolution" bottom: "conv5_2" top: "conv5_3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 512 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5_3" type: "ReLU" bottom: "conv5_3" top: "conv5_3" } layer { name: "pool5" type: "Pooling" bottom: "conv5_3" top: "pool5" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" 
bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8-5" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 1000 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" }
I1111 14:16:19.571194 23179 layer_factory.hpp:77] Creating layer train-data
I1111 14:16:19.572324 23179 net.cpp:94] Creating Layer train-data
I1111 14:16:19.572342 23179 net.cpp:409] train-data -> data
I1111 14:16:19.572358 23179 net.cpp:409] train-data -> label
I1111 14:16:19.572371 23179 data_transformer.cpp:25] Loading mean file from: /var/lib/digits/jobs/20191111-141539-152b/train_db/mean.binaryproto
I1111 14:16:19.631750 23221 db_lmdb.cpp:35] Opened lmdb /var/lib/digits/jobs/20191111-141539-152b/train_db/features
I1111 14:16:19.649268 23179 data_layer.cpp:76] output data size: 10,3,224,224
I1111 14:16:19.664477 23179 net.cpp:144] Setting up train-data
I1111 14:16:19.664496 23179 net.cpp:151] Top shape: 10 3 224 224 (1505280)
I1111 14:16:19.664502 23179 net.cpp:151] Top shape: 10 (10)
I1111 14:16:19.664505 23179 net.cpp:159] Memory required for data: 6021160
I1111 14:16:19.664513 23179 layer_factory.hpp:77] Creating layer train-data
I1111 14:16:19.665738 23179 net.cpp:94] Creating Layer train-data
F1111 14:16:19.665757 23179 net.cpp:404] Top blob 'data' produced by multiple sources.
*** Check failure stack trace: ***
@ 0x7f7323a095cd google::LogMessage::Fail()
@ 0x7f7323a0b433 google::LogMessage::SendToLog()
@ 0x7f7323a0915b google::LogMessage::Flush()
@ 0x7f7323a0be1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f73240d4a62 caffe::Net<>::AppendTop()
@ 0x7f73240dc709 caffe::Net<>::Init()
@ 0x7f73240df1c6 caffe::Net<>::Net()
@ 0x7f73240bfcca caffe::Solver<>::InitTrainNet()
@ 0x7f73240c10d7 caffe::Solver<>::Init()
@ 0x7f73240c1493 caffe::Solver<>::Solver()
@ 0x7f7324144335 caffe::Creator_AdamSolver<>()
@ 0x40b9a5 train()
@ 0x408668 main
@ 0x7f732245b830 __libc_start_main
@ 0x408dd9 _start
@ (nil) (unknown)
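Looking at the net dump above, I notice there are two Data layers that are both named "train-data" and both declare top: "data" and top: "label" — one reading train_db/features and one reading train_db/labels. If I read the error right, Caffe only allows one (non-in-place) layer to produce a given top blob, so I'd guess the label layer was supposed to come out something like this instead (just my guess at what should have been generated, not what's actually in my train_val.prototxt):

```protobuf
layer {
  name: "train-data"
  type: "Data"
  top: "data"             # images only
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "/var/lib/digits/jobs/20191111-141539-152b/train_db/mean.binaryproto"
  }
  data_param {
    source: "/var/lib/digits/jobs/20191111-141539-152b/train_db/features"
    batch_size: 10
    backend: LMDB
  }
}
layer {
  name: "train-label"     # renamed so it no longer clashes with "train-data"
  type: "Data"
  top: "label"            # labels only -- no duplicate "data" top
  include { phase: TRAIN }
  data_param {
    source: "/var/lib/digits/jobs/20191111-141539-152b/train_db/labels"
    batch_size: 10
    backend: LMDB
  }
}
```

(I've left the random mirror/crop out of the label layer just to keep the sketch short; for segmentation I imagine the same spatial transform would have to be applied to both blobs, or cropping disabled.)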

Any clues?

eiraola · Nov 11 '19 14:11