Multi-Language Tokenization Support
I'm hoping that we can get to the point where we fully support the following languages:
- English
- Spanish
- German
- French
- Russian
- Japanese
- Hindi
- Farsi
- Chinese
- Arabic
I started adding unit tests for a few of the tokenizers in these languages here: https://github.com/RubixML/ML/tree/master/tests/Tokenizers. However, it doesn't look like we support all of the languages yet. I only speak English, so it's hard for me to tell. Could we get some help from the community to verify that our Tokenizers support all of these languages and, if not, contribute a fix?
https://github.com/RubixML/ML/tree/master/src/Tokenizers
Thank you!
How can I join the work on supporting multiple languages? I am fluent in Chinese and English.
Hi @taotecode, thanks for your interest in contributing to the project! Here are the unit tests for the Tokenizers implemented in the library.
https://github.com/RubixML/ML/tree/master/tests/Tokenizers
We need help from native language speakers to ensure that we have test coverage for different languages and that the current tests are correct.
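For example, a new language case might look roughly like the sketch below. This is only an illustration; the class name, the sample sentence, and the expected tokens are placeholders that a native speaker would need to confirm or correct.
use PHPUnit\Framework\TestCase;
use Rubix\ML\Tokenizers\Word;

class WordHindiTest extends TestCase
{
    public function testTokenizeHindi() : void
    {
        $tokenizer = new Word();

        // Placeholder sentence and expected tokens; please correct them if they are wrong.
        $tokens = $tokenizer->tokenize('यदि कोई चीज़ महत्वपूर्ण है');

        $this->assertSame(['यदि', 'कोई', 'चीज़', 'महत्वपूर्ण', 'है'], $tokens);
    }
}
Even a failing test is useful here, because it tells us exactly which tokenizer and language combination needs a fix.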
@andrewdalpino I can help with Hindi. I am not sure how it is going to work, though.
Here is the problem:
$text = "यदि कोई चीज़ काफ़ी महत्वपूर्ण है, तो आपको उसे आज़माना चाहिए। भले ही - संभावित परिणाम विफलता हो।";
$tokenizer = new \Rubix\ML\Tokenizers\Word();
$tokens = $tokenizer->tokenize($text);
Expected array:
[
'यदि', 'कोई', 'चीज़', 'महत्वपूर्ण', 'है', 'तो', 'आपको', 'उसे', 'आज़माना',
'चाहिए', 'भले', 'ही', '-', 'संभावित', 'परिणाम', 'विफलता', 'हो',
]
Actual array:
[
'यद', 'क', 'ई', 'च', 'ज', 'क', 'फ', 'महत', 'वप', 'र', 'ण', 'ह', 'त', 'आपक',
'उस', 'आज', 'म', 'न', 'च', 'ह', 'ए', 'भल', 'ह', '-', 'स', 'भ', 'व', 'त',
'पर', 'ण', 'म', 'व', 'फलत', 'ह',
]
I have only tested \Rubix\ML\Tokenizers\Word so far.
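A quick check in plain PHP (purely illustrative, run outside the library) suggests the split happens at the Devanagari vowel signs, which are Unicode combining marks rather than letters:
// The vowel sign ि (U+093F) is a combining mark (\p{M}), not a letter (\p{L}).
var_dump(preg_match('/^\p{M}$/u', 'ि')); // int(1)

// A pattern that only matches letters drops the matras and breaks the word.
preg_match_all('/\p{L}+/u', 'यदि', $letters);
var_dump($letters[0]); // ['यद'] (the trailing 'ि' is lost)

// Including combining marks keeps the word intact.
preg_match_all('/[\p{L}\p{M}]+/u', 'यदि', $words);
var_dump($words[0]); // ['यदि']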
This is because Hindi and many other languages use complex text layout (CTL), so the tokenizer needs to account for combining characters that only form a complete word together with their base letters. In general terms, these fall under complex-script languages. There are already plenty of Python projects for tokenizing them; PHP needs equivalent implementations, such as hindi-tokenizer, but for the other languages as well, to support further development.
AFAIK the tokenizers come from NLTK and its derivative works, so there would need to be an equivalent implementation in PHP, or an FFI wrapper, in order to make this work.
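As a starting point, here is a rough, unofficial sketch of how a combining-mark-aware word pattern behaves on Devanagari text in plain PHP. The function name is made up for illustration and is not part of the library; it only shows the regex idea, not how it would be wired into the library's Tokenizer interface.
function tokenizeUnicodeWords(string $text) : array
{
    // \p{L} = letters, \p{M} = combining marks (matras, nukta, virama), \p{Nd} = digits.
    // Keeping \p{M} in the character class is what stops Devanagari words from being
    // split apart at every vowel sign.
    preg_match_all("/[\p{L}\p{M}\p{Nd}]+(?:['’-][\p{L}\p{M}\p{Nd}]+)*/u", $text, $matches);

    return $matches[0];
}

print_r(tokenizeUnicodeWords('यदि कोई चीज़ काफ़ी महत्वपूर्ण है'));
// ['यदि', 'कोई', 'चीज़', 'काफ़ी', 'महत्वपूर्ण', 'है']
For scripts that do not separate words with spaces at all (Chinese, Japanese, Thai), a regex like this will not be enough; PHP's intl extension exposes ICU's word segmentation via IntlBreakIterator::createWordInstance(), which could be worth evaluating before reaching for FFI.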