Unsupervised-Adaptation-for-Deep-Stereo
Check failed: MaxBottomBlobs() >= bottom.size() (2 vs. 3) L1Loss Layer takes at most 2 bottom blob(s) as input.
Dear sir, sorry to bother you. I have met an error when fine-tuning your code with my dataset. The error is: F0427 17:12:09.233387 607 layer.hpp:404] Check failed: MaxBottomBlobs() >= bottom.size() (2 vs. 3) L1Loss Layer takes at most 2 bottom blob(s) as input.
I0427 17:12:09.233264 607 net.cpp:149] Top shape: 4 1 192 384 (294912)
I0427 17:12:09.233271 607 net.cpp:149] Top shape: 4 1 192 384 (294912)
I0427 17:12:09.233276 607 net.cpp:149] Top shape: 4 1 192 384 (294912)
I0427 17:12:09.233283 607 net.cpp:149] Top shape: 4 1 192 384 (294912)
I0427 17:12:09.233288 607 net.cpp:149] Top shape: 4 1 192 384 (294912)
I0427 17:12:09.233291 607 net.cpp:157] Memory required for data: 2190802964
I0427 17:12:09.233299 607 layer_factory.hpp:77] Creating layer flow_loss1
I0427 17:12:09.233316 607 net.cpp:91] Creating Layer flow_loss1
I0427 17:12:09.233325 607 net.cpp:426] flow_loss1 <- blob66_NegReLU6_0_split_0
I0427 17:12:09.233337 607 net.cpp:426] flow_loss1 <- blob65
I0427 17:12:09.233346 607 net.cpp:426] flow_loss1 <- blob65Confidence
I0427 17:12:09.233358 607 net.cpp:400] flow_loss1 -> flow_loss1
F0427 17:12:09.233387 607 layer.hpp:404] Check failed: MaxBottomBlobs() >= bottom.size() (2 vs. 3) L1Loss Layer takes at most 2 bottom blob(s) as input.
*** Check failure stack trace: ***
@ 0x7fd1a56dd5cd google::LogMessage::Fail()
@ 0x7fd1a56df433 google::LogMessage::SendToLog()
@ 0x7fd1a56dd15b google::LogMessage::Flush()
@ 0x7fd1a56dfe1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fd1a5f92ff7 caffe::Layer<>::CheckBlobCounts()
@ 0x7fd1a5f92113 caffe::Layer<>::SetUp()
@ 0x7fd1a60aea08 caffe::Net<>::Init()
@ 0x7fd1a60acd51 caffe::Net<>::Net()
@ 0x7fd1a60e0101 caffe::Solver<>::InitTrainNet()
@ 0x7fd1a60df8ea caffe::Solver<>::Init()
@ 0x7fd1a60df334 caffe::Solver<>::Solver()
@ 0x7fd1a60ed31b caffe::SGDSolver<>::SGDSolver()
@ 0x7fd1a60f1e04 caffe::AdamSolver<>::AdamSolver()
@ 0x7fd1a60f2d38 caffe::Creator_AdamSolver<>()
@ 0x420d66 caffe::SolverRegistry<>::CreateSolver()
@ 0x41c0c4 train()
@ 0x41e639 main
@ 0x7fd1a4191830 __libc_start_main
@ 0x41ae39 _start
@ (nil) (unknown)
Aborted (core dumped)
Have you ever encountered this error before? Thanks very much.
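For reference, the failing check is Caffe's generic blob-count validation in layer.hpp: each layer declares how many bottom blobs it accepts, and Net::Init() aborts during Layer::SetUp() when the prototxt supplies more. A simplified sketch of that check, paraphrased from the stock BVLC/FlowNet Caffe sources (exact line numbers may differ between forks):

// Simplified from include/caffe/layer.hpp; called from Layer::SetUp()
// while Net::Init() is building the train net.
virtual void CheckBlobCounts(const vector<Blob<Dtype>*>& bottom,
                             const vector<Blob<Dtype>*>& top) {
  if (MaxBottomBlobs() >= 0) {
    // A FlowNet-style L1LossLayer returns 2 here, so the three bottoms
    // "blob66", "blob65", "blob65Confidence" trip this CHECK with "2 vs. 3".
    CHECK_GE(MaxBottomBlobs(), bottom.size())
        << type() << " Layer takes at most " << MaxBottomBlobs()
        << " bottom blob(s) as input.";
  }
  // ... analogous checks exist for ExactNumBottomBlobs(), MinBottomBlobs(),
  // and for the top blob counts ...
}

So the error means the L1Loss implementation that got registered at build time allows at most two bottoms, while the layer definition in train.prototxt passes three.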
Dear sir, in train.prototxt, around line 1670, the flow_loss1 layer has three bottom inputs:
name: "flow_loss1"
type: "L1Loss"
#bottom: "predict_flow1"
bottom: "blob66"
bottom: "blob65"
bottom: "blob65Confidence"
top: "flow_loss1"
loss_weight: 1
l1_loss_param {
l2_per_location: false
normalize_by_num_entries: true
}
}
When bottom: "blob65" was commented, the error about MaxBottomBlobs() >= bottom.size() (2 vs. 3) L1Loss Layer takes at most 2 bottom blob(s) as input. #6
is solved.
name: "flow_loss1"
type: "L1Loss"
#bottom: "predict_flow1"
bottom: "blob66"
#bottom: "blob65"
bottom: "blob65Confidence"
top: "flow_loss1"
loss_weight: 1
l1_loss_param {
l2_per_location: false
normalize_by_num_entries: true
}
}
Can you give me some suggestions on how to change the flow_loss1 layer? Thanks.
Are you sure you are using our implementation of the L1Loss layer? One of the differences between our code and the original FlowNet one is indeed the number of bottom blobs allowed.
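If a stock FlowNet build is being picked up instead of this repository's fork, the mismatch is in the layer header rather than in the prototxt. The original FlowNet-style L1Loss caps its inputs at two, while this repository's version presumably raises that cap to accept the confidence blob; the sketch below illustrates the difference (the method is real Caffe convention, but the exact header and value used in this repository are assumptions on my part, not verified against its sources):

// Original FlowNet-style l1_loss_layer.hpp: at most two bottoms
// (prediction, target), which is what produces the "2 vs. 3" failure.
virtual inline int MaxBottomBlobs() const { return 2; }

// Presumed override in this repository's L1Loss (assumption): allow an
// optional third bottom carrying the per-pixel confidence
// ("blob65Confidence" in train.prototxt), so three bottoms are legal.
virtual inline int MaxBottomBlobs() const { return 3; }

Rebuilding Caffe from this repository, so that the modified L1Loss is the one actually registered, should let the original three-bottom flow_loss1 definition load without commenting anything out.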