
Something wrong in Chinese?

bojone opened this issue 6 years ago • 30 comments

in python 2.7:

from flashtext import KeywordProcessor

keyword_processor = KeywordProcessor()
keyword_processor.add_keyword(u'北京')
keyword_processor.add_keyword(u'欢迎')
keyword_processor.add_keyword(u'你')
keyword_processor.extract_keywords(u'北京欢迎你')

This returns [u'北京', u'你']. Why is u'欢迎' missing?

bojone avatar Jan 17 '18 11:01 bojone

There is no built-in tokeniser for Chinese, hence this would be happening. That's my guess. I will look at it and get back.


vi3k6i5 avatar Jan 17 '18 14:01 vi3k6i5

Doesn't it seem ridiculous that a string matching tool must have a tokenizer?

bojone avatar Jan 17 '18 14:01 bojone

It has tokenisation for English. Not for Chinese.

It's a one-man project, and I only needed to deal with English. Plus, I don't know anything about Chinese writing.

This tool is mostly built with English in mind. If you want you can improve it to work with Chinese.

Sorry if this sounds like a stupid decision on my side. But you have to consider that I have already spent 100+ hours on this project for zero pay. I can't spend all my life on it, so I had to make some decisions to simplify things.


vi3k6i5 avatar Jan 17 '18 14:01 vi3k6i5

Oh, sorry, I am not blaming you.

As far as I know, many string matching tools work with English letters as the minimal unit. I am confused about why you designed it at the word level.

bojone avatar Jan 17 '18 14:01 bojone

I designed it at the character level. But when a word ends and when it doesn't is a word-level question. For example: in 'hi how are you?' words end at spaces, whereas in '.net is awesome' the word does not end with '.', it rather starts with it.

So to decide when a word ends, I need some notion of word tokenisation.
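A minimal sketch of the default boundary behaviour (nothing Chinese-specific, just flashtext's public API):

from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keyword('java')

# ' ' and '.' are word boundaries, so the keyword is found here:
print(kp.extract_keywords('I know java.'))       # ['java']

# 's' continues the same word, so there is no match inside 'javascript':
print(kp.extract_keywords('I know javascript'))  # []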

If this is confusing, let me know and I will try to give a better example.


vi3k6i5 avatar Jan 17 '18 14:01 vi3k6i5

Maybe you can separate out the tokenizer and allow us to write our own tokenizer?

like https://whoosh.readthedocs.io/en/latest/analysis.html

bojone avatar Jan 18 '18 01:01 bojone

I suggest (just a suggestion ^_^) designing it as a pure AC automaton, like https://github.com/WojciechMula/pyahocorasick/, which would be more useful and more feasible. pyahocorasick is written in C, and I'd like to see a pure Python version.
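For comparison, a plain Aho-Corasick automaton reports every occurrence with no boundary logic at all. A sketch against pyahocorasick's documented API (Python 3):

import ahocorasick

A = ahocorasick.Automaton()
for word in [u'北京', u'欢迎', u'你']:
    A.add_word(word, word)  # (key, associated value)
A.make_automaton()

# iter() yields (end_index, value) for every match, word boundaries or not
print([value for end_index, value in A.iter(u'北京欢迎你')])
# [u'北京', u'欢迎', u'你']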

bojone avatar Jan 18 '18 01:01 bojone

Cool, thanks for the suggestion. I will definitely take it into consideration :)

vi3k6i5 avatar Jan 18 '18 05:01 vi3k6i5

@vi3k6i5 Also, there are some issues with Chinese keywords loaded from a file:

processor.add_keyword_from_file('D:/keywords.txt')


UnicodeDecodeError                        Traceback (most recent call last)
in ()
----> 1 processor.add_keyword_from_file('D:/keywords.txt')

D:\Program Files\Python35\lib\site-packages\flashtext\keyword.py in add_keyword_from_file(self, keyword_file)
    313             raise IOError("Invalid file path {}".format(keyword_file))
    314         with open(keyword_file) as f:
--> 315             for line in f:
    316                 if '=>' in line:
    317                     keyword, clean_name = line.split('=>')

UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 26: illegal multibyte sequence

datalee avatar Jan 19 '18 01:01 datalee

@datalee You should first analyze the problem yourself instead of @-ing the author directly. You have not provided keywords.txt, and it would not be very difficult to find the reason. As the error indicates, it is likely related to the encoding of the file: there are probably Chinese words in it, so you should open it with open('keyword_file', encoding='utf8'). It is important to find the reason yourself instead of just pasting the error and @-ing the author.
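For example, something along these lines works on both Python 2 and 3 (a sketch; adjust the path and encoding to your file):

import io

from flashtext import KeywordProcessor

processor = KeywordProcessor()
# io.open accepts an encoding argument on Python 2 as well as Python 3
with io.open('D:/keywords.txt', encoding='utf-8') as f:
    for line in f:
        processor.add_keyword(line.strip())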

madneal avatar Jan 19 '18 07:01 madneal

@datalee please provide the file if possible. As @neal1991 pointed out, there might be an encoding issue.

vi3k6i5 avatar Jan 19 '18 07:01 vi3k6i5

Also, the keywords.txt file content should be in one of these formats (Documentation Link):

java_2e=>java
java programing=>java
product management=>product management
product management techniques=>product management

or

java_2e
java programing
product management
product management techniques

vi3k6i5 avatar Jan 19 '18 07:01 vi3k6i5

Yes, I know it must be an encoding issue, but I can't find a parameter on add_keyword_from_file for setting it. So...

add_keyword_from_file() got an unexpected keyword argument 'encoding'

datalee avatar Jan 19 '18 07:01 datalee

There is a pull request for this: https://github.com/vi3k6i5/flashtext/pull/40. I will try and get that pushed out soon.

vi3k6i5 avatar Jan 19 '18 08:01 vi3k6i5

The fix has been added to the master branch.

Please do pip install -U git+https://github.com/vi3k6i5/flashtext.git

You can now pass an encoding parameter when loading keywords from a file.
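For example (assuming the file is UTF-8 encoded):

keyword_processor.add_keyword_from_file('D:/keywords.txt', encoding='utf-8')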

vi3k6i5 avatar Jan 19 '18 08:01 vi3k6i5

@datalee let me know if that solves your problem for loading the file, and post back if there is any other issue. Thanks :)

vi3k6i5 avatar Jan 19 '18 08:01 vi3k6i5

keyword_processor.add_keyword('测试')
keywords_found = keyword_processor.extract_keywords('简单测试')

returns ['测试']

keyword_processor.add_keyword('测试')
keywords_found = keyword_processor.extract_keywords('3测试')

returns nothing

:-(

jimmydong avatar Jan 26 '18 08:01 jimmydong

I am really sorry, I can't read Chinese.

I want to help, but I genuinely can't help :( :(

I don't even know how to debug this problem :(


vi3k6i5 avatar Jan 26 '18 08:01 vi3k6i5

The reason is: there is no space between Chinese words.

So I removed digits and letters from non_word_boundaries, changing:

self.non_word_boundaries = set(string.digits + string.ascii_letters + '_')

to:

self.non_word_boundaries = set('_')

It works well.
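Applied per instance, without editing the library source, the same change looks like this (a sketch; non_word_boundaries is a plain attribute set in KeywordProcessor.__init__):

from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.non_word_boundaries = set('_')  # digits and ASCII letters now act as boundaries
kp.add_keyword('测试')
print(kp.extract_keywords('3测试'))  # ['测试']

Note that this also loosens English matching: 'java' would then match inside 'javascript'.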

jimmydong avatar Jan 26 '18 10:01 jimmydong

OK, cool. Let me see how I can incorporate that into the main code. If it works better, we can switch to this approach altogether.

Thanks for the input.


vi3k6i5 avatar Jan 26 '18 10:01 vi3k6i5

@vi3k6i5 I think the best thing you can do is separate out the tokenizer, whether for English or Chinese. You could allow us to design our own tokenizer and pass it into flashtext.
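Until such a hook exists, a thin wrapper can fake it (a hypothetical extract_with_tokenizer helper, not part of flashtext):

from flashtext import KeywordProcessor

def extract_with_tokenizer(kp, text, tokenize):
    # Pre-tokenize, then rejoin with spaces so flashtext's
    # boundary logic applies between tokens.
    return kp.extract_keywords(' '.join(tokenize(text)))

With tokenize=jieba.lcut this is essentially the approach in the next comment; keywords spanning multiple tokens would need to be added in their space-joined form.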

bojone avatar Jan 27 '18 04:01 bojone

Just add sentence segmentation for Chinese:

from flashtext import KeywordProcessor

keyword_processor = KeywordProcessor()
keyword_processor.add_keyword(u'北京')
keyword_processor.add_keyword(u'欢迎')
keyword_processor.add_keyword(u'你')
keyword_processor.add_keyword(u'测试')

import jieba

def safe_unicode(text):
    """Attempts to convert a string to unicode format."""
    # convert the text to be "safe"!
    if isinstance(text, unicode):
        return text
    else:
        return text.decode('utf-8')

for i in keyword_processor.extract_keywords(safe_unicode(' '.join(jieba.lcut('简单测试')))):
    print i

for j in keyword_processor.extract_keywords(safe_unicode(' '.join(jieba.lcut('北京欢迎你')))):
    print j

Output:

测试
北京
欢迎
你

leepand avatar Apr 12 '18 08:04 leepand

(Quoting the original report above: extract_keywords(u'北京欢迎你') returns [u'北京', u'你'], missing u'欢迎'.)

@leepand Hello, I'm also a user from China. You only need to change line 532 of the source code from idx = sequence_end_pos to idx = sequence_end_pos - 1. Code:

if __name__ == '__main__':
    kp = KeywordProcessor()
    kp.add_keyword('北京')
    kp.add_keyword('欢迎')
    kp.add_keyword('你')
    text = '北京欢迎你'
    tl = kp.extract_keywords(text)
    print(tl)

Output: ['北京', '欢迎', '你']

wuxiaobo avatar Sep 21 '18 01:09 wuxiaobo

I'm considering using Chinese characters to mimic English words, and it seems to work fine (in Python 3.6):

from flashtext import KeywordProcessor

string = '北 京 欢 迎 您 ! 北 京 欢 迎 您 !'

keyword_proc = KeywordProcessor()
keyword_proc.add_keyword('北 京')
keyword_proc.add_keyword('欢 迎')
keyword_proc.add_keyword('您')
keywords = keyword_proc.extract_keywords(string, span_info=True)

Output:

[('北 京', 0, 3), ('欢 迎', 4, 7), ('您', 8, 9), ('北 京', 12, 15), ('欢 迎', 16, 19), ('您', 20, 21)]
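A small helper makes the mimicry systematic (a hypothetical spaceify, applied to keywords and text alike):

def spaceify(text):
    # Put a space between every character so each CJK character
    # behaves like a standalone English "word" for flashtext.
    return ' '.join(text)

keyword_proc.add_keyword(spaceify('北京'))  # same as adding '北 京'
print(keyword_proc.extract_keywords(spaceify('北京欢迎您')))

Note that the reported spans then refer to the space-expanded string, not the original text.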

dolphin-zs avatar Oct 31 '18 14:10 dolphin-zs

(Quoting @wuxiaobo's fix above: change idx = sequence_end_pos to idx = sequence_end_pos - 1.)

I tried this method myself and it does work.

Also, this is at line 523 in the current source.

ljhust avatar Feb 20 '19 09:02 ljhust

Is there any PR fixing this issue?

leopku avatar Mar 12 '19 11:03 leopku

(Quoting @jimmydong's example above: extract_keywords('3测试') returns nothing.)

Has the problem of keywords not being recognized when Chinese characters are mixed with digits been solved? The method mentioned in the other replies does not handle the mixed digit + Chinese case.

Tangzy7 avatar Sep 06 '19 08:09 Tangzy7

(Quoting the same example and question above.)

So many pitfalls. It indeed fails to recognize keywords when digits are involved.

hello-lan avatar Nov 27 '20 06:11 hello-lan

(Quoting the digit-mixing question above.)

You can remove the digit characters from the "non word boundaries" set. E.g.:

from flashtext import KeywordProcessor

string = '北京3欢迎'

extracter = KeywordProcessor()
extracter.set_non_word_boundaries(set('-')) # Only keep '-'
extracter.add_keyword('欢迎')
print(extracter.extract_keywords(string))

Output:

['欢迎']

sunshichen avatar Feb 03 '21 07:02 sunshichen

(Quoting @wuxiaobo's source fix above.)

Tested in a Python 3.6 environment and it works. Thanks!

sportzhang avatar Jun 09 '21 02:06 sportzhang