Automatic_Speech_Recognition
Running into many problems
I'm a beginner, so there may be many problems I can't solve on my own. So far I've hit:
1. In the imports, core_rnn and impl cannot be imported; I had to comment all of those lines out to continue.
2. return data_lists_to_batches([np.load(os.path.join(mfccPath, fn)) for fn in os.listdir(mfccPath)], fails with OSError: [Errno 2] No such file or directory: '/home/pony/github/data/timit/phn/train/mfcc'
I'm stuck here now and don't know what to do.
An update on the problems: the commands I used earlier to set up the environment apparently caused problem 1, which is now solved. Two problems remain:
1. No module named utils. I'm running from a terminal opened in /home/linuxone/桌面/Automatic_Speech_Recognition-master on CentOS 7.
2. I manually copied utils and module into the main folder and ran from a terminal opened there, but the earlier problem 2 (the directory doesn't exist) still occurs and I can't resolve it.
Looking through earlier issues, my problem 1 seems to come up a lot but never gets resolved, and I don't know whether manually copy-pasting the folders is a valid approach. Problem 2 currently prevents the program from running at all. Thanks very much for any help.
@Ostnie
1. You should use TensorFlow r1.1 rather than r1.3. TensorFlow moved the RNN layers to tf.nn in r1.3.
2. Change the folder '/home/pony/github/data/timit/phn/train/mfcc' to any folder you like on your system.
3. Open the folder that contains your running script (usually 'main') in the terminal, then run the program.
4. Same as 3.
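The version constraint in answer 1 can be sketched as a small guard at the top of the training script. This is a hypothetical helper, not part of the repo:

```python
def targets_supported_tf(version):
    """Return True if `version` is in the 1.1.x series this repo targets.

    In TensorFlow r1.3 the RNN cells were moved, so the old import paths
    (e.g. core_rnn under tf.contrib) no longer resolve; the repo's imports
    only work on r1.1.
    """
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major, minor) == (1, 1)

# Typical use at program start (assumes tensorflow is importable):
# import tensorflow as tf
# if not targets_supported_tf(tf.__version__):
#     raise RuntimeError("This code needs TensorFlow r1.1; found " + tf.__version__)
```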
Thank you for your help, but my problems still exist. The old problem was solved by creating the folder; in other words, I created the folder by hand. I would think this should be done by a script or program, so I wonder whether creating it manually is the right approach. Unfortunately, a new problem has come up:
File "/home/linuxone/ASR/ASR/main/utils/utils.py", line 248, in data_lists_to_batches
nFeatures = inputList[0].shape[0]
IndexError: list index out of range
Thank you for your help!
I also ran into the same problem 2: return data_lists_to_batches([np.load(os.path.join(mfccPath, fn)) for fn in os.listdir(mfccPath)], OSError: [Errno 2] No such file or directory: '/home/pony/github/data/timit/phn/train/mfcc'
I'm stuck at the same place. How did you solve it?
As for the utils problem: the author wrote his own utils module, so once I added it to my Python library path, that error stopped appearing.
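One way to make the repo's own utils package importable without copying it into the Python library is to put the repository root on sys.path before the import. A minimal sketch; the helper name is mine, and the layout is assumed from this thread (the script lives in main/ and utils/ is its sibling):

```python
import os
import sys

def add_repo_root(script_path):
    """Prepend the repository root (the parent of the script's folder) to
    sys.path so that `import utils` resolves to <repo>/utils.

    Assumed layout:  <repo>/main/<script>.py  and  <repo>/utils/utils.py
    """
    repo_root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)
    return repo_root

# In the training script this would be called before the failing import:
# add_repo_root(__file__)
# import utils
```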
I still haven't solved the problem you mention either, and it's quite frustrating. I asked several people but got no solution; they said you need to download the speech data yourself and then change the directory, but that didn't work for me. Maybe you can try along those lines.
Have you solved your problem yet? If so, I'd appreciate it if you could share how. I now know that the preprocessing code in the timit folder under feature/ is needed to process the data, but whenever I run that code it tells me there are too few arguments, and I don't know how to set them. Do you have any ideas?
@Ostnie @cdcfyjh To run the program, you should make sure the mfcc files and label files are in the mfcc/ and label/ folders, respectively.
@Ostnie Sorry, the code is based on TensorFlow r1.1, and I will keep it updated with the latest TensorFlow.
@zzw922cn I have TensorFlow 1.2.0 on my computer; how can I use TensorFlow 1.1.0? I don't want to remove 1.2.0.
@zzw922cn On another computer, which has TensorFlow 1.1.0, the problem looks like this:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value DBRNN_1/fw/basic_lstm_cell/weights
[[Node: DBRNN_1/fw/basic_lstm_cell/weights/read = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]]]
I have made sure the mfcc files and label files are in the mfcc/ and label/ folders, so I don't know how to solve this.
Sorry, I don't know how to @ you. If you want TensorFlow 1.1 and 1.2 installed at the same time, maybe try a virtual machine? I have installed two versions of TensorFlow under Linux; I forget how I did it back then, but it can be done. Hope this helps.
OK, thanks a lot. I also managed it by renaming files: when using version A, I rename B's folder to break the system path so it can't find the original module, and rename it back when done.
@Ostnie You can also try Anaconda.
I downloaded a dataset online (TIMIT, already preprocessed), which includes the PHN (mfcc) files but no labels. I wanted to use the code in the author's feature/timit to process the dataset, but I keep hitting an error:
warnings.warn("Could not import alsa backend; most probably, "
Traceback (most recent call last):
File "timit_preprocess.py", line 166, in
After reading the code carefully for this error, I double-checked the input path and it is definitely correct. My invocation: python timit_preprocess.py ../../github/data/timit/phn/train/mfcc/ ../../github/data/timit/ @Ostnie (sorry, I still don't know how to @ you.) Only two arguments need to be supplied here, the location of the data and where to store the processed data; the author has provided defaults for the other arguments. Also, where can the TIMIT dataset be downloaded?
At the moment, with the dataset I downloaded online, I get this error:
train mode...
load_data...
Traceback (most recent call last):
File "main/timit_train.py", line 255, in
Could this error be caused by the missing labels? I read the label-handling code carefully; it looks a lot like the format of the phn files I downloaded, except that they don't have the .npy suffix.
When I preprocess the TIMIT data, the line from speechvalley.feature.core import check_path_exists raises an error. I checked, and check_path_exists really cannot be found. Does anyone know how to process the TIMIT data? This is such a good project, but I can't run it because I don't know how to load the data. I downloaded the raw TIMIT data from the internet.
I tried your method and still can't convert the original TIMIT data into the mfcc and label folders the program needs. I don't know if you could provide a download address for a TIMIT dataset that already has the mfcc and label folders. Thank you very much.
@GreatJiweix, have you solved the problem yet (ImportError: cannot import name 'check_path_exists')? I encountered it too.
Me too.