
Why do I keep getting so many errors?

Open MasterN05 opened this issue 5 years ago • 8 comments

1. Which exact Python 3.x version is required?
2. Using conda I tried every tensorflow-gpu release from 1.3.0 through 1.12.0, and all of them fail.
3. Could you share your version information?

I checked out the mandarin branch of https://github.com/begeekmyfriend/tacotron @[email protected]
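For what it's worth, the exact versions this fork expects are not stated in this thread; everything tried here is Python 3.6.x with a TF 1.x GPU build. Below is a minimal sketch (nothing repo-specific) for printing what the active conda environment actually provides, so it can be compared against whatever the maintainer reports:

```python
# Minimal environment check (sketch): print the interpreter and TensorFlow
# versions of the active conda env. Nothing here is specific to this repository.
import sys
import tensorflow as tf

print("python     :", sys.version.split()[0])        # e.g. 3.6.7
print("tensorflow :", tf.__version__)                 # e.g. 1.8.0 / 1.10.0
print("CUDA build :", tf.test.is_built_with_cuda())   # True for a tensorflow-gpu build
print("GPU visible:", tf.test.is_gpu_available())     # False usually means a driver/CUDA mismatch
```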

MasterN05 avatar Mar 30 '19 05:03 MasterN05

Error log below. Python version 3.6.7, TensorFlow version 1.10.0.

====== Loading checkpoint: /tmp/tacotron-20180906/model.ckpt
Traceback (most recent call last):
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call
    return fn(*args)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1263, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1725, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 877, in run
    run_metadata_ptr)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    feed_dict_tensor, options, run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1272, in _do_run
    run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1291, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "demo_server.py", line 91, in <module>
    synthesizer.load(args.checkpoint)
  File "/root/tacotron/synthesizer.py", line 24, in load
    saver = tf.train.Saver()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1281, in __init__
    self.build()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1293, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1330, in _build
    build_save=build_save, build_restore=build_restore)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 778, in _build_internal
    restore_sequentially, reshape)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 397, in _AddRestoreOps
    restore_sequentially)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 829, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1717, in __init__
    self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1737, in restore
    checkpointable.OBJECT_GRAPH_PROTO_KEY)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 351, in get_tensor
    status)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo_server.py", line 91, in <module>
    synthesizer.load(args.checkpoint)
  File "/root/tacotron/synthesizer.py", line 25, in load
    saver.restore(self.session, checkpoint_path)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1743, in restore
    err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "demo_server.py", line 91, in <module>
    synthesizer.load(args.checkpoint)
  File "/root/tacotron/synthesizer.py", line 24, in load
    saver = tf.train.Saver()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1281, in __init__
    self.build()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1293, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1330, in _build
    build_save=build_save, build_restore=build_restore)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 778, in _build_internal
    restore_sequentially, reshape)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 397, in _AddRestoreOps
    restore_sequentially)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 829, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1717, in __init__
    self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
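The NotFoundError above means the graph built by the current code expects a variable (.../Location_Sensitive_Attention/attention_bias) that the downloaded checkpoint does not contain, i.e. the checkpoint was produced by a different revision of the model code. A quick way to confirm that, sketched with the stock TF 1.x checkpoint reader (only the checkpoint path is taken from the log; nothing else is repo-specific):

```python
# Sketch: list the variable names stored in the checkpoint and check whether
# the attention_bias key the graph expects is actually present.
import tensorflow as tf

CKPT = "/tmp/tacotron-20180906/model.ckpt"   # path taken from the log above
reader = tf.train.NewCheckpointReader(CKPT)
names = sorted(reader.get_variable_to_shape_map())

for name in names:
    print(name)

expected = "model/inference/decoder/Location_Sensitive_Attention/attention_bias"
print("expected key present:", expected in names)
```

If the key is missing from the listing, no amount of TensorFlow version juggling will make the restore succeed; either check out the code revision that produced the checkpoint or train a new checkpoint with the current code.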

MasterN05 avatar Mar 30 '19 05:03 MasterN05

I ran python3 demo_server.py --checkpoint /tmp/tacotron-20180906/model.ckpt. Python version 3.6.7, tensorflow-gpu version 1.8.0, cudatoolkit 8.0, cudnn 6.0.21.

Loading checkpoint: /tmp/tacotron-20170720/model.ckpt
Traceback (most recent call last):
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "demo_server.py", line 91, in <module>
    synthesizer.load(args.checkpoint)
  File "/root/tacotron/synthesizer.py", line 25, in load
    saver.restore(self.session, checkpoint_path)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1802, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "demo_server.py", line 91, in <module>
    synthesizer.load(args.checkpoint)
  File "/root/tacotron/synthesizer.py", line 24, in load
    saver = tf.train.Saver()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1338, in __init__
    self.build()
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1347, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1384, in _build
    build_save=build_save, build_restore=build_restore)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 835, in _build_internal
    restore_sequentially, reshape)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 472, in _AddRestoreOps
    restore_sequentially)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 886, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/root/anaconda3/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

NotFoundError (see above for traceback): Key model/inference/decoder/Location_Sensitive_Attention/attention_bias not found in checkpoint
  [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
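Same root cause on TF 1.8: the graph asks for keys the 2017/2018 checkpoint never had. Purely as a diagnostic (the maintainer's advice below, training from scratch, is the actual fix), TF 1.x lets you build a Saver over only the variables that do exist in the checkpoint. A sketch, assuming the Tacotron graph has already been constructed in the default graph:

```python
# Diagnostic sketch only: restore just the variables whose names and shapes
# exist in the checkpoint. Anything missing (e.g. attention_bias) stays at its
# random initialization, so the result is not a usable model; it only shows
# how large the graph/checkpoint mismatch is.
import tensorflow as tf

CKPT = "/tmp/tacotron-20170720/model.ckpt"   # path from the log above
reader = tf.train.NewCheckpointReader(CKPT)
shape_map = reader.get_variable_to_shape_map()

# assumes the Tacotron graph has already been built in the default graph
restorable = [v for v in tf.global_variables()
              if v.op.name in shape_map and v.shape.as_list() == shape_map[v.op.name]]
print("restorable: %d / %d variables" % (len(restorable), len(tf.global_variables())))

saver = tf.train.Saver(var_list=restorable)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # missing variables keep this init
    saver.restore(sess, CKPT)
```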

MasterN05 avatar Mar 30 '19 06:03 MasterN05

Please train from scratch

begeekmyfriend avatar Mar 30 '19 07:03 begeekmyfriend

Same environment as above. Running python3 preprocess.py --dataset thchs30 fails with this error:

(tensorflow) root@udows-Z9PA-U8-Series-Invalid-entry-length-16-Fixed-up-to-11:~/tacotron# python3 preprocess.py --dataset thchs30
0it [00:00, ?it/s]
Wrote 0 utterances, 0 frames (0.00 hours)
Traceback (most recent call last):
  File "preprocess.py", line 60, in <module>
    main()
  File "preprocess.py", line 56, in main
    preprocess_thchs30(args)
  File "preprocess.py", line 30, in preprocess_thchs30
    write_metadata(metadata, out_dir)
  File "preprocess.py", line 40, in write_metadata
    print('Max input length: %d' % max(len(m[3]) for m in metadata))
ValueError: max() arg is an empty sequence
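The telling line is "Wrote 0 utterances": preprocess.py found no audio/transcript pairs, so metadata is empty and the max() call fails. That almost always means the THCHS-30 data is not where the script looks for it. A rough sanity check is sketched below; the directory name and file patterns are assumptions about a typical THCHS-30 layout, not something read from this repo's preprocess.py, so adjust them to whatever the script actually globs:

```python
# Sketch: verify the dataset directory is non-empty before running preprocess.py.
# NOTE: dataset_dir and the *.wav / *.trn patterns are assumptions about the
# THCHS-30 layout, not values taken from this repo's preprocess.py.
import glob
import os

dataset_dir = os.path.expanduser("~/tacotron/data_thchs30/data")  # assumed location
wavs = glob.glob(os.path.join(dataset_dir, "*.wav"))
trns = glob.glob(os.path.join(dataset_dir, "*.trn"))

print("wav files found:", len(wavs))
print("trn files found:", len(trns))
if not wavs:
    print("No audio found: preprocess.py will write 0 utterances, "
          "which is exactly the failure above.")
```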

MasterN05 avatar Mar 30 '19 08:03 MasterN05

Ignoring that error, I ran python3 train.py --name thchs30 and got:

(tensorflow) root@udows-Z9PA-U8-Series-Invalid-entry-length-16-Fixed-up-to-11:~/tacotron# python3 train.py --name thchs30
Checkpoint path: ./logs-thchs30/model.ckpt
Loading training data from: ./training/train.txt
Using model: tacotron
Hyperparameters:
  adam_beta1: 0.9
  adam_beta2: 0.999
  attention_depth: 128
  batch_size: 32
  cleaners: basic_cleaners
  decay_learning_rate: True
  decoder_depth: 1024
  embed_depth: 512
  encoder_depth: 256
  fmax: 7600
  fmin: 125
  frame_length_ms: 50
  frame_shift_ms: 12.5
  griffin_lim_iters: 60
  initial_learning_rate: 0.001
  max_abs_value: 4
  max_frame_num: 1000
  max_iters: 300
  min_level_db: -100
  num_freq: 2049
  num_mels: 80
  outputs_per_step: 5
  postnet_depth: 512
  power: 1.2
  preemphasis: 0.97
  prenet_depths: [256, 256]
  ref_level_db: 20
  sample_rate: 48000
  use_cmudict: False
Loaded metadata for 0 examples (0.00 hours)
Initialized Tacotron model. Dimensions:
  embedding:               (?, ?, 512)
  prenet out:              (?, ?, 256)
  encoder out:             (?, ?, 256)
  decoder out (r frames):  (?, ?, 400)
  decoder out (1 frame):   (?, ?, 80)
  postnet out:             (?, ?, 512)
  linear out:              (?, ?, 2049)
  stop token:              (?, ?)
Traceback (most recent call last):
  File "/root/tacotron/datasets/datafeeder.py", line 77, in run
    self._enqueue_next_group()
  File "/root/tacotron/datasets/datafeeder.py", line 89, in _enqueue_next_group
    examples = [self._get_next_example() for i in range(n * _batches_per_group)]
  File "/root/tacotron/datasets/datafeeder.py", line 89, in <listcomp>
    examples = [self._get_next_example() for i in range(n * _batches_per_group)]
  File "/root/tacotron/datasets/datafeeder.py", line 107, in _get_next_example
    meta = self._metadata[self._offset]
IndexError: list index out of range
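This crash is downstream of the previous one: "Loaded metadata for 0 examples" shows that ./training/train.txt is empty, so the DataFeeder indexes into an empty list. Training cannot work until preprocessing succeeds. A small fail-fast check (a sketch, not code from this repo) that surfaces the real cause before the feeder thread dies:

```python
# Sketch: fail fast with a clear message when the preprocessed metadata is
# empty, instead of crashing inside the DataFeeder thread with an IndexError.
import os

META = "./training/train.txt"   # path taken from the training log above

if not os.path.exists(META) or os.path.getsize(META) == 0:
    raise RuntimeError(
        "%s is missing or empty: preprocess.py wrote 0 utterances, so there is "
        "nothing to train on. Fix the dataset path and re-run preprocessing." % META)

with open(META, encoding="utf-8") as f:
    n = sum(1 for line in f if line.strip())
print("metadata entries:", n)
```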

MasterN05 avatar Mar 30 '19 09:03 MasterN05

Can anyone help me understand what is going on?

MasterN05 avatar Apr 01 '19 01:04 MasterN05

I got this error too.

ccl-private avatar May 29 '19 03:05 ccl-private

Please check out the path in this line

begeekmyfriend avatar May 29 '19 06:05 begeekmyfriend