CycleGAN
Hi, I ran the model for 100 epochs, but the results are not as good as yours; they look very poor, and I didn't change the code.
Can you give me a suggestion?
@debin168 Hello! I ran into a problem running this code. It fails with FailedPreconditionError (see above for traceback): Attempting to use uninitialized value matching_filenames, at "num_files_A = sess.run(self.queue_length_A)".
@debin168 Can you tell me how to solve it? Thanks in advance.
@Mikoto10032 Add tf.local_variables_initializer() to train() and test():
init = tf.global_variables_initializer() ->
init = [tf.global_variables_initializer(), tf.local_variables_initializer()]
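For reference, here is a minimal sketch of how that change fits into the session setup; it assumes TF 1.x, and the file pattern and variable names are illustrative rather than the repo's exact code:

import tensorflow as tf

# match_filenames_once stores its result in a *local* variable named
# "matching_filenames", which is why tf.local_variables_initializer() is needed.
filenames_A = tf.train.match_filenames_once("./input/trainA/*.jpg")  # illustrative path
queue_length_A = tf.size(filenames_A)

init = [tf.global_variables_initializer(), tf.local_variables_initializer()]

with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    num_files_A = sess.run(queue_length_A)  # previously raised FailedPreconditionError
    coord.request_stop()
    coord.join(threads)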
@jiawei-mo Thank you!
I am also getting poor results:
Original Input:
Generated output:

I think the model has collapsed (mode collapse). You need to stop training and run it again; GANs are notoriously difficult to train.
Can you help me solve this problem?
$ python main.py
WARNING:tensorflow:From main.py:61: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:276: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: QueueRunner.init (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From /home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From main.py:64: WholeFileReader.init (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.map(tf.read_file).
Model/g_A/c1/Conv/weights:0
Model/g_A/c1/Conv/biases:0
Model/g_A/c1/instance_norm/scale:0
Model/g_A/c1/instance_norm/offset:0
Model/g_A/c2/Conv/weights:0
Model/g_A/c2/Conv/biases:0
Model/g_A/c2/instance_norm/scale:0
Model/g_A/c2/instance_norm/offset:0
Model/g_A/c3/Conv/weights:0
Model/g_A/c3/Conv/biases:0
Model/g_A/c3/instance_norm/scale:0
Model/g_A/c3/instance_norm/offset:0
Model/g_A/r1/c1/Conv/weights:0
Model/g_A/r1/c1/Conv/biases:0
Model/g_A/r1/c1/instance_norm/scale:0
Model/g_A/r1/c1/instance_norm/offset:0
Model/g_A/r1/c2/Conv/weights:0
Model/g_A/r1/c2/Conv/biases:0
Model/g_A/r1/c2/instance_norm/scale:0
Model/g_A/r1/c2/instance_norm/offset:0
Model/g_A/r2/c1/Conv/weights:0
Model/g_A/r2/c1/Conv/biases:0
Model/g_A/r2/c1/instance_norm/scale:0
Model/g_A/r2/c1/instance_norm/offset:0
Model/g_A/r2/c2/Conv/weights:0
Model/g_A/r2/c2/Conv/biases:0
Model/g_A/r2/c2/instance_norm/scale:0
Model/g_A/r2/c2/instance_norm/offset:0
Model/g_A/r3/c1/Conv/weights:0
Model/g_A/r3/c1/Conv/biases:0
Model/g_A/r3/c1/instance_norm/scale:0
Model/g_A/r3/c1/instance_norm/offset:0
Model/g_A/r3/c2/Conv/weights:0
Model/g_A/r3/c2/Conv/biases:0
Model/g_A/r3/c2/instance_norm/scale:0
Model/g_A/r3/c2/instance_norm/offset:0
Model/g_A/r4/c1/Conv/weights:0
Model/g_A/r4/c1/Conv/biases:0
Model/g_A/r4/c1/instance_norm/scale:0
Model/g_A/r4/c1/instance_norm/offset:0
Model/g_A/r4/c2/Conv/weights:0
Model/g_A/r4/c2/Conv/biases:0
Model/g_A/r4/c2/instance_norm/scale:0
Model/g_A/r4/c2/instance_norm/offset:0
Model/g_A/r5/c1/Conv/weights:0
Model/g_A/r5/c1/Conv/biases:0
Model/g_A/r5/c1/instance_norm/scale:0
Model/g_A/r5/c1/instance_norm/offset:0
Model/g_A/r5/c2/Conv/weights:0
Model/g_A/r5/c2/Conv/biases:0
Model/g_A/r5/c2/instance_norm/scale:0
Model/g_A/r5/c2/instance_norm/offset:0
Model/g_A/r6/c1/Conv/weights:0
Model/g_A/r6/c1/Conv/biases:0
Model/g_A/r6/c1/instance_norm/scale:0
Model/g_A/r6/c1/instance_norm/offset:0
Model/g_A/r6/c2/Conv/weights:0
Model/g_A/r6/c2/Conv/biases:0
Model/g_A/r6/c2/instance_norm/scale:0
Model/g_A/r6/c2/instance_norm/offset:0
Model/g_A/r7/c1/Conv/weights:0
Model/g_A/r7/c1/Conv/biases:0
Model/g_A/r7/c1/instance_norm/scale:0
Model/g_A/r7/c1/instance_norm/offset:0
Model/g_A/r7/c2/Conv/weights:0
Model/g_A/r7/c2/Conv/biases:0
Model/g_A/r7/c2/instance_norm/scale:0
Model/g_A/r7/c2/instance_norm/offset:0
Model/g_A/r8/c1/Conv/weights:0
Model/g_A/r8/c1/Conv/biases:0
Model/g_A/r8/c1/instance_norm/scale:0
Model/g_A/r8/c1/instance_norm/offset:0
Model/g_A/r8/c2/Conv/weights:0
Model/g_A/r8/c2/Conv/biases:0
Model/g_A/r8/c2/instance_norm/scale:0
Model/g_A/r8/c2/instance_norm/offset:0
Model/g_A/r9/c1/Conv/weights:0
Model/g_A/r9/c1/Conv/biases:0
Model/g_A/r9/c1/instance_norm/scale:0
Model/g_A/r9/c1/instance_norm/offset:0
Model/g_A/r9/c2/Conv/weights:0
Model/g_A/r9/c2/Conv/biases:0
Model/g_A/r9/c2/instance_norm/scale:0
Model/g_A/r9/c2/instance_norm/offset:0
Model/g_A/c4/Conv2d_transpose/weights:0
Model/g_A/c4/Conv2d_transpose/biases:0
Model/g_A/c4/instance_norm/scale:0
Model/g_A/c4/instance_norm/offset:0
Model/g_A/c5/Conv2d_transpose/weights:0
Model/g_A/c5/Conv2d_transpose/biases:0
Model/g_A/c5/instance_norm/scale:0
Model/g_A/c5/instance_norm/offset:0
Model/g_A/c6/Conv/weights:0
Model/g_A/c6/Conv/biases:0
Model/g_A/c6/instance_norm/scale:0
Model/g_A/c6/instance_norm/offset:0
Model/g_B/c1/Conv/weights:0
Model/g_B/c1/Conv/biases:0
Model/g_B/c1/instance_norm/scale:0
Model/g_B/c1/instance_norm/offset:0
Model/g_B/c2/Conv/weights:0
Model/g_B/c2/Conv/biases:0
Model/g_B/c2/instance_norm/scale:0
Model/g_B/c2/instance_norm/offset:0
Model/g_B/c3/Conv/weights:0
Model/g_B/c3/Conv/biases:0
Model/g_B/c3/instance_norm/scale:0
Model/g_B/c3/instance_norm/offset:0
Model/g_B/r1/c1/Conv/weights:0
Model/g_B/r1/c1/Conv/biases:0
Model/g_B/r1/c1/instance_norm/scale:0
Model/g_B/r1/c1/instance_norm/offset:0
Model/g_B/r1/c2/Conv/weights:0
Model/g_B/r1/c2/Conv/biases:0
Model/g_B/r1/c2/instance_norm/scale:0
Model/g_B/r1/c2/instance_norm/offset:0
Model/g_B/r2/c1/Conv/weights:0
Model/g_B/r2/c1/Conv/biases:0
Model/g_B/r2/c1/instance_norm/scale:0
Model/g_B/r2/c1/instance_norm/offset:0
Model/g_B/r2/c2/Conv/weights:0
Model/g_B/r2/c2/Conv/biases:0
Model/g_B/r2/c2/instance_norm/scale:0
Model/g_B/r2/c2/instance_norm/offset:0
Model/g_B/r3/c1/Conv/weights:0
Model/g_B/r3/c1/Conv/biases:0
Model/g_B/r3/c1/instance_norm/scale:0
Model/g_B/r3/c1/instance_norm/offset:0
Model/g_B/r3/c2/Conv/weights:0
Model/g_B/r3/c2/Conv/biases:0
Model/g_B/r3/c2/instance_norm/scale:0
Model/g_B/r3/c2/instance_norm/offset:0
Model/g_B/r4/c1/Conv/weights:0
Model/g_B/r4/c1/Conv/biases:0
Model/g_B/r4/c1/instance_norm/scale:0
Model/g_B/r4/c1/instance_norm/offset:0
Model/g_B/r4/c2/Conv/weights:0
Model/g_B/r4/c2/Conv/biases:0
Model/g_B/r4/c2/instance_norm/scale:0
Model/g_B/r4/c2/instance_norm/offset:0
Model/g_B/r5/c1/Conv/weights:0
Model/g_B/r5/c1/Conv/biases:0
Model/g_B/r5/c1/instance_norm/scale:0
Model/g_B/r5/c1/instance_norm/offset:0
Model/g_B/r5/c2/Conv/weights:0
Model/g_B/r5/c2/Conv/biases:0
Model/g_B/r5/c2/instance_norm/scale:0
Model/g_B/r5/c2/instance_norm/offset:0
Model/g_B/r6/c1/Conv/weights:0
Model/g_B/r6/c1/Conv/biases:0
Model/g_B/r6/c1/instance_norm/scale:0
Model/g_B/r6/c1/instance_norm/offset:0
Model/g_B/r6/c2/Conv/weights:0
Model/g_B/r6/c2/Conv/biases:0
Model/g_B/r6/c2/instance_norm/scale:0
Model/g_B/r6/c2/instance_norm/offset:0
Model/g_B/r7/c1/Conv/weights:0
Model/g_B/r7/c1/Conv/biases:0
Model/g_B/r7/c1/instance_norm/scale:0
Model/g_B/r7/c1/instance_norm/offset:0
Model/g_B/r7/c2/Conv/weights:0
Model/g_B/r7/c2/Conv/biases:0
Model/g_B/r7/c2/instance_norm/scale:0
Model/g_B/r7/c2/instance_norm/offset:0
Model/g_B/r8/c1/Conv/weights:0
Model/g_B/r8/c1/Conv/biases:0
Model/g_B/r8/c1/instance_norm/scale:0
Model/g_B/r8/c1/instance_norm/offset:0
Model/g_B/r8/c2/Conv/weights:0
Model/g_B/r8/c2/Conv/biases:0
Model/g_B/r8/c2/instance_norm/scale:0
Model/g_B/r8/c2/instance_norm/offset:0
Model/g_B/r9/c1/Conv/weights:0
Model/g_B/r9/c1/Conv/biases:0
Model/g_B/r9/c1/instance_norm/scale:0
Model/g_B/r9/c1/instance_norm/offset:0
Model/g_B/r9/c2/Conv/weights:0
Model/g_B/r9/c2/Conv/biases:0
Model/g_B/r9/c2/instance_norm/scale:0
Model/g_B/r9/c2/instance_norm/offset:0
Model/g_B/c4/Conv2d_transpose/weights:0
Model/g_B/c4/Conv2d_transpose/biases:0
Model/g_B/c4/instance_norm/scale:0
Model/g_B/c4/instance_norm/offset:0
Model/g_B/c5/Conv2d_transpose/weights:0
Model/g_B/c5/Conv2d_transpose/biases:0
Model/g_B/c5/instance_norm/scale:0
Model/g_B/c5/instance_norm/offset:0
Model/g_B/c6/Conv/weights:0
Model/g_B/c6/Conv/biases:0
Model/g_B/c6/instance_norm/scale:0
Model/g_B/c6/instance_norm/offset:0
Model/d_A/c1/Conv/weights:0
Model/d_A/c1/Conv/biases:0
Model/d_A/c2/Conv/weights:0
Model/d_A/c2/Conv/biases:0
Model/d_A/c2/instance_norm/scale:0
Model/d_A/c2/instance_norm/offset:0
Model/d_A/c3/Conv/weights:0
Model/d_A/c3/Conv/biases:0
Model/d_A/c3/instance_norm/scale:0
Model/d_A/c3/instance_norm/offset:0
Model/d_A/c4/Conv/weights:0
Model/d_A/c4/Conv/biases:0
Model/d_A/c4/instance_norm/scale:0
Model/d_A/c4/instance_norm/offset:0
Model/d_A/c5/Conv/weights:0
Model/d_A/c5/Conv/biases:0
Model/d_B/c1/Conv/weights:0
Model/d_B/c1/Conv/biases:0
Model/d_B/c2/Conv/weights:0
Model/d_B/c2/Conv/biases:0
Model/d_B/c2/instance_norm/scale:0
Model/d_B/c2/instance_norm/offset:0
Model/d_B/c3/Conv/weights:0
Model/d_B/c3/Conv/biases:0
Model/d_B/c3/instance_norm/scale:0
Model/d_B/c3/instance_norm/offset:0
Model/d_B/c4/Conv/weights:0
Model/d_B/c4/Conv/biases:0
Model/d_B/c4/instance_norm/scale:0
Model/d_B/c4/instance_norm/offset:0
Model/d_B/c5/Conv/weights:0
Model/d_B/c5/Conv/biases:0
2019-10-12 07:27:33.012881: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-12 07:27:33.114601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-10-12 07:27:33.115054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Quadro P6000 major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
totalMemory: 23.88GiB freeMemory: 21.16GiB
2019-10-12 07:27:33.115067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-10-12 07:27:33.648991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-12 07:27:33.649017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-10-12 07:27:33.649021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-10-12 07:27:33.649307: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 20519 MB memory) -> physical GPU (device: 0, name: Quadro P6000, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From main.py:85: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
Traceback (most recent call last):
File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/hala/anaconda3/envs/py35gpu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value matching_filenames
[[{{node matching_filenames/read}} = IdentityT=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 362, in
Caused by op 'matching_filenames/read', defined at:
File "main.py", line 362, in
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value matching_filenames [[node matching_filenames/read (defined at main.py:56) = IdentityT=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
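The FailedPreconditionError at the end is the same issue fixed above: string_input_producer matches files via match_filenames_once, which keeps its result in a local variable, so it stays uninitialized unless tf.local_variables_initializer() runs before the queue runners start. As a longer-term fix, the deprecation warnings point to tf.data; a rough sketch of what the replacement input pipeline could look like (the file pattern and image size are illustrative, not taken from the repo):

import tensorflow as tf

# Build the A-domain input pipeline with tf.data instead of queue runners.
dataset_A = (tf.data.Dataset.list_files("./input/trainA/*.jpg", shuffle=True)  # illustrative path
             .map(tf.read_file)                                   # replaces WholeFileReader
             .map(lambda raw: tf.image.resize_images(
                 tf.image.decode_jpeg(raw, channels=3), [256, 256]))
             .repeat()
             .batch(1))
image_A = dataset_A.make_one_shot_iterator().get_next()  # no queue runners or local-variable init needed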