tf_unet
Question about padding
Hello, Jakeret.
The U-Net code that you uploaded helps me a lot; I really appreciate it.
I am trying to understand U-Net and am starting with the Unet toy_problem.
The problem I have is this:
Input image: 512x512
Output image (segmented): 472x472
```python
def conv2d(x, W, keep_prob_):
    #conv_2d = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID')
    conv_2d = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.dropout(conv_2d, keep_prob_)

def deconv2d(x, W, stride):
    x_shape = tf.shape(x)
    output_shape = tf.stack([x_shape[0], x_shape[1]*2, x_shape[2]*2, x_shape[3]//2])
    #return tf.nn.conv2d_transpose(x, W, output_shape, strides=[1, stride, stride, 1], padding='SAME')
    return tf.nn.conv2d_transpose(x, W, output_shape, strides=[1, 2, 2, 1], padding='SAME')

def max_pool(x, n):
    return tf.nn.max_pool(x, ksize=[1, n, n, 1], strides=[1, n, n, 1], padding='SAME')
```
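To see the effect of that padding argument in isolation, here is a quick standalone check (plain TensorFlow, independent of the tf_unet code) for a single 3x3 convolution:

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.ones((1, 512, 512, 1), dtype=np.float32))  # NHWC input
w = tf.constant(np.ones((3, 3, 1, 1), dtype=np.float32))      # 3x3 kernel

valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')
same  = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

print(valid.shape)  # (1, 510, 510, 1) -- VALID trims 2 pixels per spatial dimension
print(same.shape)   # (1, 512, 512, 1) -- SAME zero-pads, so the size is preserved
```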
I know that it is characteristic of the U-Net architecture that the output becomes smaller than the input.
I googled and found that we can keep the output the same size as the input by changing the padding option from padding='VALID' to padding='SAME'.
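For reference, the 512 → 472 reduction above is exactly what unpadded (VALID) 3x3 convolutions produce in a 3-level U-Net. The sketch below just tracks the spatial size through the network; the choice of 3 layers for the toy problem is my assumption, since that is what makes the numbers match:

```python
def valid_unet_output_size(in_size, layers=3):
    """Track the spatial size through a U-Net built from unpadded (VALID)
    3x3 convolutions with 2x2 max-pooling and 2x2 up-convolutions."""
    size = in_size
    for _ in range(layers - 1):   # contracting path
        size = size - 4           # two 3x3 VALID convs: -2 pixels each
        size = size // 2          # 2x2 max pooling halves the resolution
    size = size - 4               # bottom: two 3x3 VALID convs
    for _ in range(layers - 1):   # expanding path
        size = size * 2           # 2x2 up-convolution doubles the resolution
        size = size - 4           # two 3x3 VALID convs
    return size

print(valid_unet_output_size(512))  # -> 472
```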
If I keep padding='VALID', the network works well, but the output image is smaller than the input image.
But when I switch to padding='SAME', I get the error below. May I have some help here?
Thanks.
```
InvalidArgumentError                      Traceback (most recent call last)
~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1360     try:
-> 1361       return fn(*args)
   1362     except errors.OpError as e:

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1339       return tf_session.TF_Run(session, options, feed_dict, fetch_list,
-> 1340                                target_list, status, run_metadata)
   1341

~\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
    515           compat.as_text(c_api.TF_Message(self.status.status)),
--> 516           c_api.TF_GetCode(self.status.status))
    517     # Delete the underlying status object from memory otherwise it stays alive

InvalidArgumentError: logits and labels must be same size: logits_size=[1048576,2] labels_size=[0,2]
	 [[Node: softmax_cross_entropy_with_logits_sg = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg/Reshape, softmax_cross_entropy_with_logits_sg/Reshape_1)]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
~\AppData\Roaming\Python\Python36\site-packages\tf_unet-0.1.1-py3.6.egg\tf_unet\unet.py in train(self, data_provider, output_path, training_iters, epochs, dropout, display_step, restore, write_graph, prediction_path)
    412
    413         test_x, test_y = data_provider(self.verification_batch_size)
--> 414         pred_shape = self.store_prediction(sess, test_x, test_y, "_init")
    415
    416         summary_writer = tf.summary.FileWriter(output_path, graph=sess.graph)

~\AppData\Roaming\Python\Python36\site-packages\tf_unet-0.1.1-py3.6.egg\tf_unet\unet.py in store_prediction(self, sess, batch_x, batch_y, name)
    455         loss = sess.run(self.net.cost, feed_dict={self.net.x: batch_x,
    456                                                   self.net.y: util.crop_to_shape(batch_y, pred_shape),
--> 457                                                   self.net.keep_prob: 1.})
    458
    459         logging.info("Verification error= {:.1f}%, loss= {:.4f}".format(error_rate(prediction,

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    903     try:
    904       result = self._run(None, fetches, feed_dict, options_ptr,
--> 905                          run_metadata_ptr)
    906       if run_metadata:
    907         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1135     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1136       results = self._do_run(handle, final_targets, final_fetches,
-> 1137                              feed_dict_tensor, options, run_metadata)
   1138     else:
   1139       results = []

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1353     if handle is None:
   1354       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1355                            options, run_metadata)
   1356     else:
   1357       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1372     except KeyError:
   1373       pass
-> 1374     raise type(e)(node_def, op, message)
   1375
   1376   def _extend_graph(self):

InvalidArgumentError: logits and labels must be same size: logits_size=[1048576,2] labels_size=[0,2]
	 [[Node: softmax_cross_entropy_with_logits_sg = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg/Reshape, softmax_cross_entropy_with_logits_sg/Reshape_1)]]

Caused by op 'softmax_cross_entropy_with_logits_sg', defined at:
  File "C:\Users\Lee Doyle\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Lee Doyle\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Lee Doyle\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[1048576,2] labels_size=[0,2]
	 [[Node: softmax_cross_entropy_with_logits_sg = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg/Reshape, softmax_cross_entropy_with_logits_sg/Reshape_1)]]
```
Well, the error `logits and labels must be same size: logits_size=[1048576,2] labels_size=[0,2]` is saying that the prediction and the label shapes are no longer in line after the code changes.
The unet code is automatically cropping the training labels. Maybe you should check that bit.
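For anyone trying to track that down: the traceback above already points at one of the cropping calls, `util.crop_to_shape(batch_y, pred_shape)` inside `store_prediction`. I don't have the exact source in front of me, but a centre-crop helper of that kind usually looks something like the sketch below; note the pitfall in the slicing, because a zero offset (which is what you get once the prediction is full size under padding='SAME') turns a `data[:, offset:-offset]` slice into an empty array, which would explain the `labels_size=[0,2]` in the error:

```python
import numpy as np

def crop_to_shape(data, shape):
    """Centre-crop a label batch (N, H, W, C) to the network's output shape.
    Hypothetical sketch of the helper the traceback refers to, not the tf_unet source."""
    offset_h = (data.shape[1] - shape[1]) // 2
    offset_w = (data.shape[2] - shape[2]) // 2
    # Works while the prediction is smaller than the labels, but 0:-0 is an
    # empty slice when the offset is zero.
    return data[:, offset_h:(-offset_h), offset_w:(-offset_w)]

labels = np.zeros((4, 512, 512, 2))
print(crop_to_shape(labels, (4, 472, 472, 2)).shape)  # (4, 472, 472, 2) -- fine with VALID padding
print(crop_to_shape(labels, (4, 512, 512, 2)).shape)  # (4, 0, 0, 2)     -- empty once offset == 0
```

If that is indeed what the util module does, the options would be to keep padding='VALID', or to guard the crop so that a zero offset leaves the labels untouched.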
@loveoclock did you have any luck with implementing padding="SAME"? I'm having trouble finding where the unet.py script is cropping the training labels.
Has anyone found a solution to the automatic cropping of the training labels?
Did you solve the problem? I have the same problem.