ChineseNER
Error when training with the default parameters
As the title says: when training the model, I hit the following error:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\cloudy\AppData\Local\Temp\jieba.cache
Loading model cost 1.237 seconds.
Prefix dict has been built succesfully.
Found 4313 unique words (979180 in total)
Loading pretrained embeddings from wiki_100.utf8...
Found 13 unique named entity tags
20864 / 0 / 4636 sentences in train / dev / test.
Traceback (most recent call last):
  File "main.py", line 225, in <module>
    tf.app.run(main)
  File "D:\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "main.py", line 219, in main
    train()
  File "main.py", line 150, in train
    train_manager = BatchManager(train_data, FLAGS.batch_size)
  File "C:\Users\cloudy\Desktop\ChineseNER\data_utils.py", line 285, in __init__
    self.batch_data = self.sort_and_pad(data, batch_size)
  File "C:\Users\cloudy\Desktop\ChineseNER\data_utils.py", line 293, in sort_and_pad
    batch_data.append(self.pad_data(sorted_data[i*batch_size: (i+1)*batch_size]))
TypeError: slice indices must be integers or None or have an __index__ method
I have tried a few fixes myself with no luck. Any help would be greatly appreciated!
I ran into the same problem.
Same here. After changing the two False flags to True, main.py throws exactly this error. How can it be fixed?
Hi, did any of you solve this?
Hit the same thing. The cause is that the slice indices in [i*batch_size : (i+1)*batch_size] are not ints; wrapping them in int() fixed it for me.
Hey, could it be that the model input isn't aligned with the text?
Try removing this line and see whether that helps:
sorted_data = sorted(data, key=lambda x: len(x[0]))
data_utils.py
line 293:
batch_data.append(self.pad_data(sorted_data[i*batch_size : (i+1)*batch_size]))
=>
batch_data.append(self.pad_data(sorted_data[int(i*batch_size) : int((i+1)*batch_size)]))
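For context, here is a minimal sketch of what the patched sort_and_pad could look like. The surrounding BatchManager details (notably the real pad_data, which pads sentences to the batch's max length) are assumed, not copied from the repo. The root cause is that under Python 3 the "/" operator returns a float, so any index derived from it must be cast back to int before slicing:

```python
import math


class BatchManager:
    """Minimal sketch; only the parts relevant to the TypeError are shown."""

    def __init__(self, data, batch_size):
        self.batch_data = self.sort_and_pad(data, batch_size)

    def sort_and_pad(self, data, batch_size):
        # Under Python 3, "/" yields a float; without the int() casts the
        # float propagates into the slice indices and raises
        # "TypeError: slice indices must be integers ...".
        num_batch = int(math.ceil(len(data) / batch_size))
        # Sort by sentence length so each batch needs minimal padding.
        sorted_data = sorted(data, key=lambda x: len(x[0]))
        batch_data = []
        for i in range(num_batch):
            batch_data.append(self.pad_data(
                sorted_data[int(i * batch_size): int((i + 1) * batch_size)]))
        return batch_data

    @staticmethod
    def pad_data(batch):
        # Placeholder: the real method pads every sentence in the batch
        # to the length of the longest one.
        return batch
```

Equivalently, computing num_batch with integer division (len(data) // batch_size) keeps everything int from the start and makes the casts in the slice unnecessary.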