
Ability to specify custom tokenizer

Open ibnesayeed opened this issue 8 years ago • 6 comments

Currently, the following code is used to split a document into tokens/words for training and classification.

str.gsub(/[^\p{WORD}\s]/, '').downcase.split

This covers the general case, but there are situations where the user might want to customize how a document is split into words. For example, tokenizing Japanese text is a whole different thing. Another situation where a custom tokenizer is needed is when the user wants to train the model on N-grams (for example, bi-grams such as "New York"). Splitting "New York" into "New" and "York" would mean "New" gets removed if it is present in the stopwords. Similarly, "to be or not to be" is a popular example of a significant phrase made entirely of common stopwords. N-grams often play a significant role in contextualizing a document and help improve the accuracy of the model in special situations. In many languages (Arabic, Persian, and Urdu, to name a few) two or more words are combined (still separated by spaces, just put together) to form various linguistic constructs. This can be important if one wants to identify the author of a relatively small piece of text, such as a forum post.
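For illustration, a word-level bi-gram tokenizer could look roughly like the sketch below (the bigram_tokenize helper is hypothetical, not existing code; it reuses the cleanup regex above and simply pairs adjacent words):

# Hypothetical sketch of a word-level bigram tokenizer
def bigram_tokenize(str)
  words = str.gsub(/[^\p{WORD}\s]/, '').downcase.split
  # Pair each word with its successor so phrases like "new york" stay intact
  words.each_cons(2).map { |a, b| "#{a} #{b}" }
end

bigram_tokenize("New York is lovely")
# => ["new york", "york is", "is lovely"]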

It would be nice if we could pass a lambda as a tokenizer at classifier initialization time, or provide some other more expressive means to tell the system how to split the text.
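Roughly, usage could look like the following (the tokenizer: option is only a suggestion here, it does not exist in the current API):

require 'classifier-reborn'

# Hypothetical: a lambda that turns a string into an array of tokens
bigram_tokenizer = lambda do |str|
  str.gsub(/[^\p{WORD}\s]/, '').downcase.split.each_cons(2).map { |a, b| "#{a} #{b}" }
end

# The tokenizer: keyword is a sketch of the proposed interface, not a real option yet
classifier = ClassifierReborn::Bayes.new('Interesting', 'Uninteresting',
                                         tokenizer: bigram_tokenizer)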

ibnesayeed avatar Jan 16 '17 17:01 ibnesayeed

I was thinking about adding n-gram support as well. I want to do this by abstracting tokenizing out into a separate public API that can either be called by the classifier or passed in. I'm not sure which approach would be better.

Ch4s3 avatar Jan 17 '17 15:01 Ch4s3

Would dependency injection be a good idea, where we create an instance of the tokenizer and then pass it during initialization of the classifier, the way we do for the storage backend support?
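Something like this, mirroring how a storage backend instance is passed in today (both the BigramTokenizer class and the tokenizer: option are hypothetical sketches):

# Hypothetical tokenizer object exposing a single tokenize(str) method
class BigramTokenizer
  def tokenize(str)
    str.gsub(/[^\p{WORD}\s]/, '').downcase.split.each_cons(2).map { |a, b| "#{a} #{b}" }
  end
end

# Sketch only: the instance is injected the same way a storage backend is
classifier = ClassifierReborn::Bayes.new('Interesting', 'Uninteresting',
                                         tokenizer: BigramTokenizer.new)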

ibnesayeed avatar Jan 17 '17 15:01 ibnesayeed

In the first post what I described were word-based n-grams, which are also called shingles. However, one can also use letter-based n-grams, which often produce good results while putting a finite upper bound on total memory used (the maximum possible number of keys is the number of possible letters raised to the power of the n-gram length), and could be ideal for training on large collections.
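As a rough sketch (the char_trigrams helper is made up): with an alphabet of, say, 26 letters plus space, character trigrams are bounded by 27^3 = 19,683 distinct keys no matter how large the training corpus gets.

# Hypothetical sketch of a letter-based n-gram (character trigram) tokenizer
def char_trigrams(str)
  cleaned = str.gsub(/[^\p{WORD}\s]/, '').downcase
  # Slide a window of 3 characters over the cleaned string
  cleaned.chars.each_cons(3).map(&:join)
end

char_trigrams("New York")
# => ["new", "ew ", "w y", " yo", "yor", "ork"]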

ibnesayeed avatar Jan 17 '17 15:01 ibnesayeed

Yeah, I think dependency injection is the way to go here.

Ch4s3 avatar Feb 08 '17 20:02 Ch4s3

I've opened #161, but it should be resolved as a part of this tokenizer issue... Sorry I didn't research this before I opened it.

piroor avatar Jun 29 '17 04:06 piroor

@piroor Thanks for hopping in!

Ch4s3 avatar Jul 31 '17 04:07 Ch4s3