H-DenseUNet
some questions about train_hybrid.py
I followed your advice: "3. Train 2D DenseUnet: First, you need to download the pretrained model from ImageNet Pretrained, extract it and put it in the folder 'model'. Then run:"
sh bash_train.sh
I saved the model in Experiments/model. Then I edited train_hybrid.py and changed model_weight = ./Experiments/model/weights.96-0.03.hdf5 (the model I trained in step 3).
Then I ran train_hybrid.py, but it failed:
CUDA_VISIBLE_DEVICES='2' python train_hybrid.py -mode 3dpart
Using TensorFlow backend.
Creating and compiling model...
Traceback (most recent call last):
File "train_hybrid.py", line 212, in
So did I misunderstand the steps? By the way, I'd also like to know whether I can use the model I trained in step 3 to directly generate the liver and tumor masks. I tried to edit test.py and set model_weight to the model I trained in step 3, but this error occurred:

up0_sum = add([line0, up0])
File "Keras-2.0.8/keras/layers/merge.py", line 519, in add
return Add(**kwargs)(inputs)
File "Keras-2.0.8/keras/engine/topology.py", line 577, in __call__
self.build(input_shapes)
File "Keras-2.0.8/keras/layers/merge.py", line 84, in build
output_shape = self._compute_elemwise_op_output_shape(output_shape, shape)
File "Keras-2.0.8/keras/layers/merge.py", line 55, in _compute_elemwise_op_output_shape
str(shape1) + ' ' + str(shape2))
ValueError: Operands could not be broadcast together with shapes (32, 32, 2208) (32, 32, 3840)

Is there anything else I have to edit?
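For reference, using the step-3 model on its own would mean building the 2D architecture in test.py instead of the hybrid one, then predicting slice by slice. Below is a minimal sketch, not the repo's actual test code: the builder denseunet_2d and the load_ct/preprocess helpers are hypothetical stand-ins, and the three-adjacent-slice input with three output classes is assumed from the paper's 2D design.

import numpy as np

# Hypothetical: build the same 2D architecture that was trained in step 3,
# so the checkpoint's layer shapes match the graph.
model = denseunet_2d()
model.load_weights('./Experiments/model/weights.96-0.03.hdf5', by_name=True)

volume = preprocess(load_ct())  # hypothetical loading/windowing helpers; shape (slices, 512, 512)
pred_slices = []
for i in range(1, volume.shape[0] - 1):
    # the 2D net takes three adjacent slices as a 3-channel input
    x = np.stack([volume[i - 1], volume[i], volume[i + 1]], axis=-1)
    scores = model.predict(x[np.newaxis])[0]         # (512, 512, 3) class scores
    pred_slices.append(np.argmax(scores, axis=-1))   # 0=background, 1=liver, 2=tumor
mask = np.stack(pred_slices)                         # stacked liver/tumor mask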
Hi,
"ValueError: You are trying to load a weight file containing 1 layers into a model with 682 layers."
I think this error occurs because the training file did not use multi-GPU, but the model weights from step 2 were trained with multi-GPU. So, try to use
"model = make_parallel(model, args.b/10, mini_batch=10)"
in train_hybrid.py before you load the model weights.
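For concreteness, a minimal sketch of that ordering inside train_hybrid.py, reusing the surrounding code quoted later in this thread (make_parallel is the repo's multi-GPU wrapper; args.b/10 is the GPU count implied by 10-sample mini-batches):

model = denseunet_3d(args)
# Wrap for multi-GPU *before* loading, so the model serializes the same way
# as the step-2 checkpoint (the wrapped net appears as a single nested layer).
model = make_parallel(model, args.b / 10, mini_batch=10)
sgd = SGD(lr=1e-3, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss=[weighted_crossentropy])
model.load_weights(args.model_weight)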
Hello, I also encountered similar problems. I saved the 2D model in Experiments/model, then edited train_hybrid.py and changed model_weight = ./Experiments/model/weights.64-0.04.hdf5 (the model I trained in step 3). For the 2D training I only used one GPU, and train_hybrid.py is the same.
Traceback (most recent call last):
File "train_hybrid.py", line 212, in <module>
train_and_predict(args)
File "train_hybrid.py", line 145, in train_and_predict
model.load_weights(args.model_weight)
File "/home/zly/miniconda3/envs/tf/lib/python3.6/site-packages/Keras-2.0.8-py3.6.egg/keras/engine/topology.py", line 2627, in load_weights
File "/home/zly/miniconda3/envs/tf/lib/python3.6/site-packages/Keras-2.0.8-py3.6.egg/keras/engine/topology.py", line 3076, in load_weights_from_hdf5_group
ValueError: You are trying to load a weight file containing 494 layers into a model with 682 layers.
What should I modify?
I am still trying to figure out the problem. We can discuss through email or QQ: [email protected].
OK, setting model.load_weights(..., by_name=True) works! Loading only the 2D weights is fine.
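Spelled out, a minimal sketch of that fix (the checkpoint path is the one from the comment above):

# Load only the layers whose names match the 2D checkpoint; the 3D layers
# absent from the file keep their fresh initialization, so the 494-vs-682
# layer-count mismatch no longer raises.
model = denseunet_3d(args)  # the model being trained in 3dpart mode
model.load_weights('./Experiments/model/weights.64-0.04.hdf5', by_name=True)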
This is due to model-loading errors. I have made it clearer in train_hybrid.py.
Since the 2D training uses multiple GPUs, when loading the weights into the denseunet_3d model we should use
model.load_weights(args.model_weight, by_name=True, by_gpu=True, two_model=True, by_flag=True)
This is hybrid training with the BN layers and the 2D model part fixed.
After training "denseunet_3d", we fine-tune the model using "dense_rnn_net". At this stage, training updates both the 2D and 3D parts:
if args.arch == "3dpart":
    # Stage 1: train the 3D part; the 2D weights are loaded by name
    # (with the repo's extra multi-GPU flags) and stay fixed.
    model = denseunet_3d(args)
    model_path = "/3dpart_model"
    sgd = SGD(lr=1e-3, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss=[weighted_crossentropy])
    model.load_weights(args.model_weight, by_name=True, by_gpu=True, two_model=True, by_flag=True)
else:
    # Stage 2: fine-tune the full hybrid model end to end from the
    # checkpoint produced by the 3dpart stage.
    model = dense_rnn_net(args)
    model_path = "/hybrid_model"
    sgd = SGD(lr=1e-3, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss=[weighted_crossentropy])
    model.load_weights(args.model_weight)
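So the intended order is: train the 2D model (step 3); run train_hybrid.py with -mode 3dpart, pointing model_weight at the 2D checkpoint so it is loaded by name while the 3D part trains; then run train_hybrid.py again in hybrid mode, with model_weight pointing at the resulting 3dpart checkpoint, so dense_rnn_net fine-tunes the 2D and 3D parts together.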
Hi, what's your Python version?
Is there any way to parallelize the training? Training on one TITAN Xp takes far too long.