Use a specific stemmer per dataset according to its language
Currently, the 'porter' stemmer is used by default for DuckDB indexing here: https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR145
See https://duckdb.org/docs/extensions/full_text_search.html for more details about the 'stemmer' parameter.
In the future, we could try to identify the dataset language and use an appropriate stemmer parameter when creating the FTS index.
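As a reference point, here is a minimal sketch with the DuckDB Python client of what passing a language-specific stemmer could look like (the table and column names are made up; the real index job has its own schema):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL fts")
con.execute("LOAD fts")

# Toy table standing in for an indexed split; table/column names are illustrative only.
con.execute("CREATE TABLE data (row_id INTEGER, text VARCHAR)")
con.execute("INSERT INTO data VALUES (1, 'les journaux quotidiens'), (2, 'hello world')")

# Instead of the default stemmer='porter', pass a language-appropriate value
# (here 'french'); the other create_fts_index parameters keep their defaults.
con.execute(
    "PRAGMA create_fts_index('data', 'row_id', 'text', stemmer='french', overwrite=1)"
)

# BM25 query against the generated fts_main_data schema.
print(
    con.execute(
        "SELECT row_id, fts_main_data.match_bm25(row_id, 'quotidien') AS score "
        "FROM data WHERE score IS NOT NULL"
    ).fetchall()
)
```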
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
P2 label should have prevented the stale bot from closing the issue. Fixed with #1635.
https://datasets-server.huggingface.co/search?dataset=HeshamHaroon%2FQA_Arabic&config=HeshamHaroon--QA_Arabic&split=train&query=%D9%85%D9%86&offset=0&limit=100 returns no results, although it should (query=من)
Same issue with https://datasets-server.huggingface.co/search?dataset=satpalsr%2FindicCorpv2&config=pa&split=train&query=%E0%A8%A1%E0%A8%B0%E0%A8%BE%E0%A8%85%E0%A8%B0%E0%A8%BE%E0%A8%82&offset=0&limit=100 (query=ਡਰਾਅਰਾਂ)
Works well for French: https://datasets-server.huggingface.co/search?dataset=allocine&config=allocine&split=train&query=quotidien&offset=0&limit=100
I tried the Arabic and Russian stemmers as in the DuckDB docs, but I wasn't able to perform a simple query using FTS. I posted an issue here: https://github.com/duckdb/duckdb/issues/10254
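For context, this is roughly the kind of simple check that was failing; the rows and names below are made up, and the exact reproduction is in the linked duckdb issue:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL fts")
con.execute("LOAD fts")
con.execute("CREATE TABLE data (row_id INTEGER, text VARCHAR)")
con.execute("INSERT INTO data VALUES (1, 'من أين أنت؟'), (2, 'أهلا وسهلا')")

# As in the DuckDB FTS docs, but with stemmer='arabic'. Note that the default
# ignore pattern '(\\.|[^a-z])+' only keeps [a-z], so a non-Latin dataset may
# also need a custom ignore regex.
con.execute(
    "PRAGMA create_fts_index('data', 'row_id', 'text', stemmer='arabic', overwrite=1)"
)

# The kind of simple query that came back empty on duckdb versions affected by
# https://github.com/duckdb/duckdb/issues/10254.
print(
    con.execute(
        "SELECT row_id, fts_main_data.match_bm25(row_id, 'من') AS score "
        "FROM data WHERE score IS NOT NULL"
    ).fetchall()
)
```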
https://github.com/duckdb/duckdb/issues/10254 has been fixed, but I think we will need to solve https://github.com/huggingface/datasets-server/issues/1914 and find a way to avoid breaking search when updating the duckdb version.
good reaction time from the duckdb team!
The https://pypi.org/project/duckdb/0.9.3.dev2934/ pre-release looks to have fixed FTS for non-ASCII characters. Is this a version we can currently use, or should we wait for an official release?
let's try, I would say
Do we still need to work on this? I have seen that using the porter stemmer works in other languages like Arabic and Russian.
https://github.com/huggingface/dataset-viewer/pull/2928 will add a specific stemmer for a dataset only if it is marked as monolingual (that is, only one language for all splits). But there are some caveats:
- DuckDB only supports 26 stemmer languages (see https://duckdb.org/docs/extensions/full_text_search.html#pragma-create_fts_index)
- What if a dataset supports more than one language? (Assuming we get the language from HfApi's card data `language` field.) Which of those languages should we use for the split? Even if the config name contains a language name, we could try to infer the split's language with tools like langdetect or fastText. Within the same split there could also be different columns in different languages. I was thinking of creating one index per language (or maybe per column) and, in the end, combining all the results for the search criteria, or trying embeddings with multilingual models. (Maybe this was the idea for multilingual datasets, @lhoestq?) Any comments? @huggingface/dataset-viewer (A rough sketch of the monolingual selection is below this list.)
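To make the monolingual case concrete, here is a rough sketch of how the stemmer could be chosen from the dataset card language; the helper name, the mapping, and the partial stemmer list are hypothetical, and #2928 contains the actual implementation:

```python
from huggingface_hub import HfApi

# Partial list of stemmers accepted by DuckDB's create_fts_index
# (see the FTS docs linked above for the full list).
DUCKDB_STEMMERS = {"arabic", "english", "french", "german", "porter", "russian", "spanish"}

# Hypothetical mapping from dataset-card language codes to DuckDB stemmer names.
LANGUAGE_TO_STEMMER = {"ar": "arabic", "en": "english", "fr": "french", "ru": "russian"}


def get_stemmer(dataset: str) -> str:
    """Pick a stemmer for a monolingual dataset, falling back to 'porter'."""
    card_data = HfApi().dataset_info(dataset).card_data
    languages = getattr(card_data, "language", None) or []
    if isinstance(languages, str):
        languages = [languages]
    # Only monolingual datasets get a language-specific stemmer.
    if len(languages) == 1:
        stemmer = LANGUAGE_TO_STEMMER.get(languages[0])
        if stemmer in DUCKDB_STEMMERS:
            return stemmer
    return "porter"
```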
Starting with monolingual datasets sounds like the best idea, since, as you explained, handling multilingual datasets can be quite complex. The list of 26 is a good start already, and we can surely fall back on the porter stemmer.
For multilingual datasets, ideally DuckDB could somehow allow using multiple stemmers? Let's check with them, I guess.
To extend the list of supported languages: I've come across the idea of using proxy tokenizers for languages that don't have dedicated ones - for those, one can use the tokenizer of the closest related language (if it uses the same writing system). For example, there is such a list in the datatrove lib: https://github.com/huggingface/datatrove/blob/898efc0fc6ee2050f8ef78f7236cace2b26f2824/src/datatrove/utils/word_tokenizers.py#L297 (I'm questioning some choices there :D but the idea is nice).
Maybe we can use the same idea for stemmers?
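For illustration, a sketch of how such a proxy table could plug into the stemmer selection; the language pairs below are made up for the example, not vetted linguistic choices:

```python
# Hypothetical proxy-stemmer table in the spirit of datatrove's proxy word
# tokenizers: a language without a dedicated Snowball stemmer is mapped to a
# closely related language that uses the same script.
PROXY_STEMMERS = {
    "gl": "portuguese",  # Galician -> Portuguese
    "af": "dutch",       # Afrikaans -> Dutch
    "be": "russian",     # Belarusian -> Russian
}


def resolve_stemmer(language_code: str, language_to_stemmer: dict[str, str]) -> str:
    """Use a dedicated stemmer if one exists, then a proxy, then 'porter'."""
    return (
        language_to_stemmer.get(language_code)
        or PROXY_STEMMERS.get(language_code)
        or "porter"
    )


print(resolve_stemmer("gl", {"fr": "french"}))  # -> "portuguese", via the proxy table
```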
Maybe we should close this issue since it's basically done now with #2928, and open new issues if we want to support multilingual datasets, more than 26 stemmers, or other improvements as proposed by Polina?