CycleGAN

Hi, I ran the model for 100 epochs, but my results are not as good as yours; they are very poor. The code was not changed.

Open • debin168 opened this issue on Oct 30, 2017 • 7 comments

Can you give me a suggestion?

debin168 avatar Oct 30 '17 01:10 debin168

@debin168 Hello! I have a problem running this code. It fails with the error FailedPreconditionError (see above for traceback): Attempting to use uninitialized value matching_filenames at "num_files_A = sess.run(self.queue_length_A)".

Mikoto10032 avatar May 03 '18 04:05 Mikoto10032

@debin168 Can you tell me how to solve it? Thanks in advance.

Mikoto10032 avatar May 03 '18 04:05 Mikoto10032

@Mikoto10032 Add tf.local_variables_initializer() in train() and test(): change init = tf.global_variables_initializer() to init = [tf.global_variables_initializer(), tf.local_variables_initializer()].
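
For context, tf.train.match_filenames_once() stores the matched file list in a local variable (the LOCAL_VARIABLES collection), which tf.global_variables_initializer() does not touch. A minimal self-contained sketch of the fix, with a placeholder glob pattern:

    import tensorflow as tf

    # match_filenames_once() creates the "matching_filenames" local variable
    # that the FailedPreconditionError complains about.
    filenames_A = tf.train.match_filenames_once("datasets/trainA/*.jpg")  # placeholder path

    # Before: init = tf.global_variables_initializer()
    # After: initialize local variables as well.
    init = [tf.global_variables_initializer(), tf.local_variables_initializer()]

    with tf.Session() as sess:
        sess.run(init)
        print(sess.run(tf.size(filenames_A)))  # now runs without the error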

jiawei-mo avatar May 30 '18 21:05 jiawei-mo

@jiawei-mo Thank you!

Mikoto10032 avatar Jun 01 '18 11:06 Mikoto10032

I am getting poor results as well. [Images attached: original input and generated output.]

ArkaJU avatar Oct 14 '18 12:10 ArkaJU

I think the model has collapsed. You need to stop training and run it again. GANs are notoriously difficult to train.
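
One cheap way to catch this early is to track the per-iteration discriminator loss: a common failure signature is the discriminator loss dropping to near zero and staying there while the generator loss climbs. A minimal, hypothetical helper (not from this repo) that flags that pattern:

    def looks_collapsed(d_losses, window=200, threshold=1e-3):
        """Return True when the discriminator loss has stayed below
        `threshold` for the last `window` iterations."""
        recent = d_losses[-window:]
        return len(recent) == window and max(recent) < threshold

You could call this inside the training loop on a running list of discriminator losses and stop or restart the run when it fires.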

Auth0rM0rgan avatar Nov 07 '18 15:11 Auth0rM0rgan

Can you help me solve this problem?

$ python main.py
WARNING:tensorflow:From main.py:61: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:276: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From main.py:64: WholeFileReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.map(tf.read_file).
Model/g_A/c1/Conv/weights:0 Model/g_A/c1/Conv/biases:0 Model/g_A/c1/instance_norm/scale:0 Model/g_A/c1/instance_norm/offset:0 Model/g_A/c2/Conv/weights:0 Model/g_A/c2/Conv/biases:0 Model/g_A/c2/instance_norm/scale:0 Model/g_A/c2/instance_norm/offset:0 Model/g_A/c3/Conv/weights:0 Model/g_A/c3/Conv/biases:0 Model/g_A/c3/instance_norm/scale:0 Model/g_A/c3/instance_norm/offset:0 Model/g_A/r1/c1/Conv/weights:0 Model/g_A/r1/c1/Conv/biases:0 Model/g_A/r1/c1/instance_norm/scale:0 Model/g_A/r1/c1/instance_norm/offset:0 Model/g_A/r1/c2/Conv/weights:0 Model/g_A/r1/c2/Conv/biases:0 Model/g_A/r1/c2/instance_norm/scale:0 Model/g_A/r1/c2/instance_norm/offset:0 Model/g_A/r2/c1/Conv/weights:0 Model/g_A/r2/c1/Conv/biases:0 Model/g_A/r2/c1/instance_norm/scale:0 Model/g_A/r2/c1/instance_norm/offset:0 Model/g_A/r2/c2/Conv/weights:0 Model/g_A/r2/c2/Conv/biases:0 Model/g_A/r2/c2/instance_norm/scale:0 Model/g_A/r2/c2/instance_norm/offset:0 Model/g_A/r3/c1/Conv/weights:0 Model/g_A/r3/c1/Conv/biases:0 Model/g_A/r3/c1/instance_norm/scale:0 Model/g_A/r3/c1/instance_norm/offset:0 Model/g_A/r3/c2/Conv/weights:0 Model/g_A/r3/c2/Conv/biases:0 Model/g_A/r3/c2/instance_norm/scale:0 Model/g_A/r3/c2/instance_norm/offset:0 Model/g_A/r4/c1/Conv/weights:0 Model/g_A/r4/c1/Conv/biases:0 Model/g_A/r4/c1/instance_norm/scale:0 Model/g_A/r4/c1/instance_norm/offset:0 Model/g_A/r4/c2/Conv/weights:0 Model/g_A/r4/c2/Conv/biases:0 Model/g_A/r4/c2/instance_norm/scale:0 Model/g_A/r4/c2/instance_norm/offset:0 Model/g_A/r5/c1/Conv/weights:0 Model/g_A/r5/c1/Conv/biases:0 Model/g_A/r5/c1/instance_norm/scale:0 Model/g_A/r5/c1/instance_norm/offset:0 Model/g_A/r5/c2/Conv/weights:0 Model/g_A/r5/c2/Conv/biases:0 Model/g_A/r5/c2/instance_norm/scale:0 Model/g_A/r5/c2/instance_norm/offset:0 Model/g_A/r6/c1/Conv/weights:0 Model/g_A/r6/c1/Conv/biases:0 Model/g_A/r6/c1/instance_norm/scale:0 Model/g_A/r6/c1/instance_norm/offset:0 Model/g_A/r6/c2/Conv/weights:0 Model/g_A/r6/c2/Conv/biases:0 Model/g_A/r6/c2/instance_norm/scale:0 Model/g_A/r6/c2/instance_norm/offset:0 Model/g_A/r7/c1/Conv/weights:0 Model/g_A/r7/c1/Conv/biases:0 Model/g_A/r7/c1/instance_norm/scale:0 Model/g_A/r7/c1/instance_norm/offset:0 Model/g_A/r7/c2/Conv/weights:0 Model/g_A/r7/c2/Conv/biases:0 Model/g_A/r7/c2/instance_norm/scale:0 Model/g_A/r7/c2/instance_norm/offset:0 Model/g_A/r8/c1/Conv/weights:0 Model/g_A/r8/c1/Conv/biases:0 Model/g_A/r8/c1/instance_norm/scale:0 Model/g_A/r8/c1/instance_norm/offset:0 Model/g_A/r8/c2/Conv/weights:0 Model/g_A/r8/c2/Conv/biases:0 Model/g_A/r8/c2/instance_norm/scale:0 Model/g_A/r8/c2/instance_norm/offset:0 Model/g_A/r9/c1/Conv/weights:0 Model/g_A/r9/c1/Conv/biases:0 Model/g_A/r9/c1/instance_norm/scale:0 Model/g_A/r9/c1/instance_norm/offset:0 Model/g_A/r9/c2/Conv/weights:0 Model/g_A/r9/c2/Conv/biases:0 Model/g_A/r9/c2/instance_norm/scale:0 Model/g_A/r9/c2/instance_norm/offset:0 Model/g_A/c4/Conv2d_transpose/weights:0 Model/g_A/c4/Conv2d_transpose/biases:0 Model/g_A/c4/instance_norm/scale:0 Model/g_A/c4/instance_norm/offset:0 Model/g_A/c5/Conv2d_transpose/weights:0 Model/g_A/c5/Conv2d_transpose/biases:0 Model/g_A/c5/instance_norm/scale:0 Model/g_A/c5/instance_norm/offset:0 Model/g_A/c6/Conv/weights:0 Model/g_A/c6/Conv/biases:0 Model/g_A/c6/instance_norm/scale:0 Model/g_A/c6/instance_norm/offset:0 Model/g_B/c1/Conv/weights:0 Model/g_B/c1/Conv/biases:0 Model/g_B/c1/instance_norm/scale:0 Model/g_B/c1/instance_norm/offset:0 Model/g_B/c2/Conv/weights:0 Model/g_B/c2/Conv/biases:0 Model/g_B/c2/instance_norm/scale:0 Model/g_B/c2/instance_norm/offset:0 
Model/g_B/c3/Conv/weights:0 Model/g_B/c3/Conv/biases:0 Model/g_B/c3/instance_norm/scale:0 Model/g_B/c3/instance_norm/offset:0 Model/g_B/r1/c1/Conv/weights:0 Model/g_B/r1/c1/Conv/biases:0 Model/g_B/r1/c1/instance_norm/scale:0 Model/g_B/r1/c1/instance_norm/offset:0 Model/g_B/r1/c2/Conv/weights:0 Model/g_B/r1/c2/Conv/biases:0 Model/g_B/r1/c2/instance_norm/scale:0 Model/g_B/r1/c2/instance_norm/offset:0 Model/g_B/r2/c1/Conv/weights:0 Model/g_B/r2/c1/Conv/biases:0 Model/g_B/r2/c1/instance_norm/scale:0 Model/g_B/r2/c1/instance_norm/offset:0 Model/g_B/r2/c2/Conv/weights:0 Model/g_B/r2/c2/Conv/biases:0 Model/g_B/r2/c2/instance_norm/scale:0 Model/g_B/r2/c2/instance_norm/offset:0 Model/g_B/r3/c1/Conv/weights:0 Model/g_B/r3/c1/Conv/biases:0 Model/g_B/r3/c1/instance_norm/scale:0 Model/g_B/r3/c1/instance_norm/offset:0 Model/g_B/r3/c2/Conv/weights:0 Model/g_B/r3/c2/Conv/biases:0 Model/g_B/r3/c2/instance_norm/scale:0 Model/g_B/r3/c2/instance_norm/offset:0 Model/g_B/r4/c1/Conv/weights:0 Model/g_B/r4/c1/Conv/biases:0 Model/g_B/r4/c1/instance_norm/scale:0 Model/g_B/r4/c1/instance_norm/offset:0 Model/g_B/r4/c2/Conv/weights:0 Model/g_B/r4/c2/Conv/biases:0 Model/g_B/r4/c2/instance_norm/scale:0 Model/g_B/r4/c2/instance_norm/offset:0 Model/g_B/r5/c1/Conv/weights:0 Model/g_B/r5/c1/Conv/biases:0 Model/g_B/r5/c1/instance_norm/scale:0 Model/g_B/r5/c1/instance_norm/offset:0 Model/g_B/r5/c2/Conv/weights:0 Model/g_B/r5/c2/Conv/biases:0 Model/g_B/r5/c2/instance_norm/scale:0 Model/g_B/r5/c2/instance_norm/offset:0 Model/g_B/r6/c1/Conv/weights:0 Model/g_B/r6/c1/Conv/biases:0 Model/g_B/r6/c1/instance_norm/scale:0 Model/g_B/r6/c1/instance_norm/offset:0 Model/g_B/r6/c2/Conv/weights:0 Model/g_B/r6/c2/Conv/biases:0 Model/g_B/r6/c2/instance_norm/scale:0 Model/g_B/r6/c2/instance_norm/offset:0 Model/g_B/r7/c1/Conv/weights:0 Model/g_B/r7/c1/Conv/biases:0 Model/g_B/r7/c1/instance_norm/scale:0 Model/g_B/r7/c1/instance_norm/offset:0 Model/g_B/r7/c2/Conv/weights:0 Model/g_B/r7/c2/Conv/biases:0 Model/g_B/r7/c2/instance_norm/scale:0 Model/g_B/r7/c2/instance_norm/offset:0 Model/g_B/r8/c1/Conv/weights:0 Model/g_B/r8/c1/Conv/biases:0 Model/g_B/r8/c1/instance_norm/scale:0 Model/g_B/r8/c1/instance_norm/offset:0 Model/g_B/r8/c2/Conv/weights:0 Model/g_B/r8/c2/Conv/biases:0 Model/g_B/r8/c2/instance_norm/scale:0 Model/g_B/r8/c2/instance_norm/offset:0 Model/g_B/r9/c1/Conv/weights:0 Model/g_B/r9/c1/Conv/biases:0 Model/g_B/r9/c1/instance_norm/scale:0 Model/g_B/r9/c1/instance_norm/offset:0 Model/g_B/r9/c2/Conv/weights:0 Model/g_B/r9/c2/Conv/biases:0 Model/g_B/r9/c2/instance_norm/scale:0 Model/g_B/r9/c2/instance_norm/offset:0 Model/g_B/c4/Conv2d_transpose/weights:0 Model/g_B/c4/Conv2d_transpose/biases:0 Model/g_B/c4/instance_norm/scale:0 Model/g_B/c4/instance_norm/offset:0 Model/g_B/c5/Conv2d_transpose/weights:0 Model/g_B/c5/Conv2d_transpose/biases:0 Model/g_B/c5/instance_norm/scale:0 Model/g_B/c5/instance_norm/offset:0 Model/g_B/c6/Conv/weights:0 Model/g_B/c6/Conv/biases:0 Model/g_B/c6/instance_norm/scale:0 Model/g_B/c6/instance_norm/offset:0 Model/d_A/c1/Conv/weights:0 Model/d_A/c1/Conv/biases:0 Model/d_A/c2/Conv/weights:0 Model/d_A/c2/Conv/biases:0 Model/d_A/c2/instance_norm/scale:0 Model/d_A/c2/instance_norm/offset:0 Model/d_A/c3/Conv/weights:0 Model/d_A/c3/Conv/biases:0 Model/d_A/c3/instance_norm/scale:0 Model/d_A/c3/instance_norm/offset:0 Model/d_A/c4/Conv/weights:0 Model/d_A/c4/Conv/biases:0 Model/d_A/c4/instance_norm/scale:0 Model/d_A/c4/instance_norm/offset:0 Model/d_A/c5/Conv/weights:0 Model/d_A/c5/Conv/biases:0 Model/d_B/c1/Conv/weights:0 
Model/d_B/c1/Conv/biases:0 Model/d_B/c2/Conv/weights:0 Model/d_B/c2/Conv/biases:0 Model/d_B/c2/instance_norm/scale:0 Model/d_B/c2/instance_norm/offset:0 Model/d_B/c3/Conv/weights:0 Model/d_B/c3/Conv/biases:0 Model/d_B/c3/instance_norm/scale:0 Model/d_B/c3/instance_norm/offset:0 Model/d_B/c4/Conv/weights:0 Model/d_B/c4/Conv/biases:0 Model/d_B/c4/instance_norm/scale:0 Model/d_B/c4/instance_norm/offset:0 Model/d_B/c5/Conv/weights:0 Model/d_B/c5/Conv/biases:0 2019-10-12 07:27:33.012881: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-10-12 07:27:33.114601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-12 07:27:33.115054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: Quadro P6000 major: 6 minor: 1 memoryClockRate(GHz): 1.645 pciBusID: 0000:01:00.0 totalMemory: 23.88GiB freeMemory: 21.16GiB 2019-10-12 07:27:33.115067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0 2019-10-12 07:27:33.648991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-10-12 07:27:33.649017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 2019-10-12 07:27:33.649021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N 2019-10-12 07:27:33.649307: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 20519 MB memory) -> physical GPU (device: 0, name: Quadro P6000, pci bus id: 0000:01:00.0, compute capability: 6.1) WARNING:tensorflow:From main.py:85: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.data module. Traceback (most recent call last): File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call return fn(*args) File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value matching_filenames [[{{node matching_filenames/read}} = IdentityT=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 362, in <module>
    main()
  File "main.py", line 356, in main
    model.train()
  File "main.py", line 254, in train
    self.input_read(sess)
  File "main.py", line 87, in input_read
    num_files_A = sess.run(self.queue_length_A)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value matching_filenames
  [[node matching_filenames/read (defined at main.py:56) = Identity[T=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

Caused by op 'matching_filenames/read', defined at:
  File "main.py", line 362, in <module>
    main()
  File "main.py", line 356, in main
    model.train()
  File "main.py", line 238, in train
    self.input_setup()
  File "main.py", line 56, in input_setup
    filenames_A = tf.train.match_filenames_once("/home/hala/CDGAN/CycleGAN_Code/datasets/dirty/*.png")
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py", line 77, in match_filenames_once
    collections=[ops.GraphKeys.LOCAL_VARIABLES])
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
    aggregation=aggregation)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
    expected_shape=expected_shape, import_scope=import_scope)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
    constraint=constraint)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 1491, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 81, in identity
    return gen_array_ops.identity(input, name=name)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3454, in identity
    "Identity", input=input, name=name)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
    op_def=op_def)
  File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value matching_filenames
  [[node matching_filenames/read (defined at main.py:56) = Identity[T=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

hala3 avatar Oct 12 '19 05:10 hala3
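
This is the same uninitialized "matching_filenames" local variable discussed above: adding tf.local_variables_initializer() to the init list in train() and test() should resolve it. Separately, the deprecation warnings in this log point at tf.data as the replacement for the queue-based input pipeline. A rough, untested sketch of what that migration could look like (the glob pattern is copied from the traceback; the 256x256 size and [-1, 1] scaling are assumptions about this implementation):

    import tensorflow as tf

    # Glob pattern taken from the traceback above; do the same for domain B.
    pattern_A = "/home/hala/CDGAN/CycleGAN_Code/datasets/dirty/*.png"

    def _load(path):
        # Read and decode one PNG, resize to the (assumed) 256x256 model input,
        # and scale pixel values to [-1, 1].
        image = tf.image.decode_png(tf.read_file(path), channels=3)
        image = tf.image.resize_images(image, [256, 256])
        return tf.cast(image, tf.float32) / 127.5 - 1.0

    dataset_A = (tf.data.Dataset.list_files(pattern_A)
                 .shuffle(buffer_size=1000)
                 .map(_load)
                 .repeat()
                 .batch(1))
    image_A = dataset_A.make_one_shot_iterator().get_next()
    # image_A would stand in for the tensor produced by the old
    # string_input_producer / WholeFileReader pipeline in main.py.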