nlp_base
Basic models for natural language processing
Regarding the XGBoost-based Chinese interrogative-sentence classifier: I'd like to build on your work, but I need the training corpus.
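The thread doesn't quote the classifier's actual corpus format or feature set, but as an illustration, interrogative-sentence classifiers of this kind typically train on `label + sentence` pairs and surface cues such as sentence-final particles and question words. A minimal sketch under those assumptions (the corpus layout, particle list, and feature names are all hypothetical, not taken from the repo):

```python
# Hypothetical corpus layout: (label, sentence) pairs,
# label "1" = interrogative, "0" = declarative.
SAMPLE_CORPUS = [
    ("1", "你吃饭了吗"),
    ("1", "这是什么意思"),
    ("0", "我已经吃过饭了"),
]

# Surface cues commonly used to detect Chinese questions (an assumed list).
QUESTION_PARTICLES = ("吗", "呢", "么")
QUESTION_WORDS = ("什么", "为什么", "怎么", "哪", "谁", "几", "多少")

def extract_features(sentence: str) -> list:
    """Turn a raw sentence into a numeric feature vector that a model
    such as an XGBoost classifier could be trained on."""
    return [
        int(sentence.endswith(QUESTION_PARTICLES)),       # sentence-final question particle
        int(any(w in sentence for w in QUESTION_WORDS)),  # contains an interrogative word
        int(sentence.endswith(("?", "？"))),              # explicit question mark
        len(sentence),                                    # sentence length as a weak signal
    ]

# Feature matrix X and label vector y, ready for any standard classifier.
X = [extract_features(s) for _, s in SAMPLE_CORPUS]
y = [int(label) for label, _ in SAMPLE_CORPUS]
print(X[0])  # [1, 0, 0, 5]
```

The point of the sketch is only the data shape: once sentences are reduced to numeric vectors like these, fitting XGBoost (or any other classifier) is routine.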
I have finished my word segmentation model and the results are satisfactory. If I want to use your POS tagger to train a tagger that assigns a POS tag to each word token based on the segmentation output, what structure should I convert my corpus into?
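The repository's expected input layout isn't quoted in this thread, but POS-tagging corpora are most commonly stored either as `word/tag` pairs on one line per sentence (the People's Daily style) or as one `word<TAB>tag` row per token with a blank line between sentences (CoNLL style). A small converter sketch under that assumption, starting from `(word, tag)` tuples:

```python
def to_slash_format(tagged_sentence):
    """Render [(word, tag), ...] as 'word/tag word/tag ...' on one line,
    the layout used by People's-Daily-style corpora."""
    return " ".join(f"{word}/{tag}" for word, tag in tagged_sentence)

def to_conll_format(tagged_sentence):
    """Render [(word, tag), ...] as one 'word<TAB>tag' row per token,
    CoNLL-style; a blank line would separate consecutive sentences."""
    return "\n".join(f"{word}\t{tag}" for word, tag in tagged_sentence) + "\n"

sent = [("我", "r"), ("爱", "v"), ("北京", "ns")]
print(to_slash_format(sent))  # 我/r 爱/v 北京/ns
```

Whichever layout the repo actually expects, the conversion from segmenter output is a mechanical reshaping like the above; the tag set (`r`, `v`, `ns`, ...) must of course match the one the tagger was designed for.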
Hello, while studying the code I downloaded, running /interrogative/manage.py raises an error:
```
Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1758, in <module>
    main()
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line...
```
```
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Lenovo\AppData\Local\Temp\jieba.cache
Loading model cost 0.906 seconds.
Prefix dict has been built succesfully.
Traceback (most recent call last):
  File...
```
So sorry to bother you again... when I call train() this error occurs:
```
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.173...
```