
I think there are some problems in your function input_read() in main.py.

Open anqier0468 opened this issue 7 years ago • 6 comments

I think there are some problems in your function input_read() in main.py. Shouldn't we add code such as `while not coord.should_stop():` so that the variable image_file_A always gets images from filename_queue_A? Anyway, when I run your code, there are some problems. I am confused about how you got your results. Hoping for your help, thank you.
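For reference, the reading loop this comment asks about is the standard TF 1.x queue-runner recipe, sketched below. Only `filename_queue_A`, `image_file_A`, and `image_A` are names taken from this thread; the glob path and everything else are placeholder assumptions, not the repository's exact code.

```python
import tensorflow as tf  # TF 1.x

# Build a filename queue and a whole-file reader (path is a placeholder).
filenames_A = tf.train.match_filenames_once("./input/trainA/*.jpg")
filename_queue_A = tf.train.string_input_producer(filenames_A)

image_reader = tf.WholeFileReader()
_, image_file_A = image_reader.read(filename_queue_A)
image_A = tf.image.decode_jpeg(image_file_A)

with tf.Session() as sess:
    # match_filenames_once stores its result in a LOCAL variable, so local
    # variables must be initialized too (see the fix at the end of this thread).
    sess.run((tf.global_variables_initializer(),
              tf.local_variables_initializer()))

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        step = 0
        # Without num_epochs the producer cycles forever, so bound the loop.
        while not coord.should_stop() and step < 5:
            image_tensor = sess.run(image_A)  # each run() dequeues one file
            step += 1
    except tf.errors.OutOfRangeError:
        pass  # raised when the queue is closed or exhausted
    finally:
        coord.request_stop()
        coord.join(threads)
```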

anqier0468 avatar Oct 12 '17 03:10 anqier0468

```
Traceback (most recent call last):
  File "main.py", line 364, in <module>
    main()
  File "main.py", line 358, in main
    model.train()
  File "main.py", line 256, in train
    self.input_read(sess)
  File "main.py", line 100, in input_read
    image_tensor = sess.run(self.image_A)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1118, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1315, in _do_run
    options, run_metadata)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_0_input_producer' is closed and has insufficient elements (requested 1, current size 0)
	 [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](WholeFileReaderV2, input_producer)]]
	 [[Node: DecodeJpeg/_21 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_DecodeJpeg", tensor_type=DT_UINT8, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

Caused by op u'ReaderReadV2', defined at:
  File "main.py", line 364, in <module>
    main()
  File "main.py", line 358, in main
    model.train()
  File "main.py", line 240, in train
    self.input_setup()
  File "main.py", line 65, in input_setup
    _, image_file_A = image_reader.read(filename_queue_A)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/ops/io_ops.py", line 194, in read
    return gen_io_ops._reader_read_v2(self._reader_ref, queue_ref, name=name)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 654, in _reader_read_v2
    queue_handle=queue_handle, name=name)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 789, in _apply_op_helper
    op_def=op_def)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3018, in create_op
    op_def=op_def)
  File "/home/lthpc/virtual_tf1.3_python2.7/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1576, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): FIFOQueue '_0_input_producer' is closed and has insufficient elements (requested 1, current size 0)
	 [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](WholeFileReaderV2, input_producer)]]
	 [[Node: DecodeJpeg/_21 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_DecodeJpeg", tensor_type=DT_UINT8, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
```
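This error means the filename queue (`input_producer`) was closed while still empty, so the very first `ReaderReadV2` had nothing to dequeue. A likely cause, consistent with the fix posted at the end of this thread, is that `tf.train.match_filenames_once` keeps its matched file list in a local variable: if only global variables are initialized, the list stays empty and the producer closes the queue immediately. A quick diagnostic, using the hypothetical `filenames_A` from the sketch above:

```python
# If this prints an empty list, either local variables were never
# initialized or the glob pattern matched no files.
sess.run(tf.local_variables_initializer())
print(sess.run(filenames_A))
```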

anqier0468 avatar Oct 12 '17 13:10 anqier0468

How did you solve this problem?

taoyunuo avatar Oct 23 '17 08:10 taoyunuo

Same problem here! Can anyone shed some light on this? Thanks in advance.

dinggd avatar Dec 08 '17 03:12 dinggd

Oh, I think I have the same problem too. Has anyone else solved it?

Mikoto10032 avatar May 03 '18 04:05 Mikoto10032

Hello, has anyone found a way around this yet? I am facing the same issue.

angad94-14 avatar Oct 03 '19 03:10 angad94-14

@angad94-14

In main.py, at lines 247 and 330, change `init = tf.global_variables_initializer()` to `init = (tf.global_variables_initializer(), tf.local_variables_initializer())`.
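A minimal sketch of that change in context, assuming main.py otherwise follows the usual TF 1.x pattern (the numbers 247 and 330 refer to lines in main.py, not this page):

```python
# tf.train.match_filenames_once stores its result in a LOCAL variable;
# without local initialization the filename list stays empty, the input
# queue closes, and ReaderReadV2 raises the OutOfRangeError above.
init = (tf.global_variables_initializer(),
        tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init)  # sess.run evaluates both initializer ops in the tuple
```

With both initializers run, the filename queue should be populated before the first read, and training should get past input_read().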

YangYongNan avatar Oct 29 '19 12:10 YangYongNan