Interest in Improving Sentence Tokenization
Hi @Ayushk4 - @oxinabox and @aviks suggested that I ping you.
I am interested in investigating and improving the sentence tokenizers in WordTokenizers.jl. Would it be of interest to you if I worked on a PR for this? Thanks!
Sure. Contributions are welcome.
I am not familiar with how spaCy handles sentence splitting. Maybe we could have something similar in this package as well.
Do you have any ideas on how you want to improve the sentence tokenizer? Could you also share some samples (if possible) from your PDF that weren't splitting well with the current tokenizers?
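For example, something like the snippet below would help. This is only a rough sketch, assuming the package's exported `split_sentences` function; abbreviations followed by capitalised words are the usual failure mode for rule-based splitters:

```julia
using WordTokenizers

# Illustrative sample: abbreviations ("a.m.", "Prof.") followed by capitalised
# words often confuse rule-based splitters. The exact output depends on the
# current rules in WordTokenizers.jl, so this is only a demonstration input.
text = "The meeting starts at 10 a.m. Prof. Jones will chair it."
for (i, sent) in enumerate(split_sentences(text))
    println(i, ": ", sent)
end
```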
I think this is one of the authoritative models in this domain.
https://www.mitpressjournals.org/doi/abs/10.1162/coli.2006.32.4.485
There may be later ones, but NLTK's Punkt tokenizer is a similar implementation.
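To give a rough feel for the idea in that paper (Kiss & Strunk, 2006, which Punkt implements), here is a toy Julia sketch of unsupervised abbreviation detection. It is only an illustration, not the actual Punkt statistics, which use collocational log-likelihood tests and also handle initials and ordinal numbers:

```julia
# Toy sketch: flag short word types that almost always occur with a trailing
# period as likely abbreviations. (Not the real Punkt statistics.)
function candidate_abbreviations(corpus::AbstractString; threshold=0.8, maxlen=4)
    with_period = Dict{String,Int}()
    total = Dict{String,Int}()
    for raw in split(corpus)
        word = lowercase(rstrip(raw, ['.', ',', ';', ':', '!', '?']))
        isempty(word) && continue
        total[word] = get(total, word, 0) + 1
        if endswith(raw, ".")
            with_period[word] = get(with_period, word, 0) + 1
        end
    end
    [w for (w, n) in total if length(w) <= maxlen && get(with_period, w, 0) / n >= threshold]
end

corpus = "Dr. Smith met Mr. Jones on Jan. 5. They left early. Dr. Brown stayed."
candidate_abbreviations(corpus)  # likely includes "dr", "mr" and "jan" (plus noise such as "5")
```

Something along those lines, learned from the target document itself, could complement the current rule-based splitter.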