mdBook
Chinese search support
Set language to "zh", then you can search Chinese words.
[book]
authors = ["Zhou Yue"]
language = "zh"
multilingual = false
src = "src"
title = "trial"
@ehuss
Why hasn't this PR been merged, and why has there been no reply? Many people in the Chinese community use mdBook and can currently only search in English, so why hasn't this PR been merged after 10 months? Is there a reason?
Sorry, I don't have time to review all PRs.
From just a quick scan of this PR, there are a number of issues:
- The inclusion of the extra stuff needs to be conditional. For books not using Chinese, it is a significant extra cost. This includes building elasticlunr, which IIRC is a large increase, and the inclusion of extra JavaScript. (One way the JavaScript side could be gated is sketched after this list.)
- This PR includes formatting changes unrelated to the PR (such as indentation changes). Those should usually be separate.
- It's not clear why the extra javascript is needed. Without some sort of explanation in the PR description, it requires reverse-engineering the code, which takes a lot of time.
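On the first point, here is a minimal client-side sketch (not mdBook's actual implementation) of gating the extra scripts on the book language. The names bookLanguage and loadScript are illustrative only, and this covers just the JavaScript-inclusion side, not the build-time cost of compiling elasticlunr:

```ts
// Hypothetical sketch only: fetch the Chinese-specific search assets
// from this PR (lunr.stemmer.support.js, lunr.zh.js) only when the book
// language actually needs them, so other books pay no extra cost.
function loadScript(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const s = document.createElement("script");
    s.src = src;
    s.onload = () => resolve();
    s.onerror = () => reject(new Error(`failed to load ${src}`));
    document.head.appendChild(s);
  });
}

async function loadSearchAssets(bookLanguage: string): Promise<void> {
  if (bookLanguage.toLowerCase().startsWith("zh")) {
    await loadScript("lunr.stemmer.support.js");
    await loadScript("lunr.zh.js");
  }
}
```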
@ehuss Thanks for the reply; knowing the reasons makes it easier to improve this.
I tried it, and it seems the extra JavaScript can be included as additional-js. If so, then maybe we should handle the extra JavaScript files in another way?
[output.html]
additional-js = [
"lunr.zh.js",
"lunr.stemmer.support.js",
]
Please also add language = 'zh-CN', 'zh-HK', and 'zh-TW' as aliases.
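A minimal sketch of how such aliases could be normalized down to the base tag before choosing a tokenizer; normalizeSearchLanguage is a hypothetical helper, not something mdBook exposes:

```ts
// Hypothetical helper: map regional Chinese tags to the base "zh" that the
// Chinese tokenizer would be registered under. Not part of mdBook's real API.
function normalizeSearchLanguage(tag: string): string {
  const lower = tag.toLowerCase();
  // "zh", "zh-cn", "zh-hk", "zh-tw" should all select the Chinese tokenizer.
  if (lower === "zh" || lower.startsWith("zh-")) {
    return "zh";
  }
  return lower;
}

console.log(normalizeSearchLanguage("zh-TW")); // "zh"
console.log(normalizeSearchLanguage("en"));    // "en"
```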
Hi, why doesn't Chinese search work for me? I also set "zh".
Any progress on this PR? We really need non-English support.
I think so too.
Is there any progress on this PR?
- It's not clear why the extra javascript is needed. Without some sort of explanation in the PR description, it requires reverse-engineering the code, which takes a lot of time.
The JavaScript comes from https://github.com/MihaiValentin/lunr-languages. A better option would be to use lunr-languages directly and no longer use elasticlunr. Until then, it may be necessary to wait for some progress to be made on #5.
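For reference, the usual lunr-languages wiring looks roughly like the sketch below, following the pattern in its README; lunr.zh here is the plugin this PR ships as lunr.zh.js, and whether it can drop in for the current elasticlunr setup is exactly the open question:

```ts
// Sketch only (CommonJS style, as in the lunr-languages README; a TS project
// would need Node typings for require). The zh plugin may have its own
// segmenter dependency at runtime, which is not shown here.
const lunr = require("lunr");
require("lunr-languages/lunr.stemmer.support")(lunr);
require("lunr-languages/lunr.zh")(lunr);

const idx = lunr(function (this: any) {
  this.use(lunr.zh); // switch the tokenizer/pipeline to Chinese
  this.ref("id");
  this.field("title");
  this.field("body");
  this.add({ id: "1", title: "中文搜索", body: "一个支持中文搜索的示例文档" });
});

console.log(idx.search("中文"));
```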
A better option might be https://github.com/ajitid/fzf-for-js, a local search engine that supports Unicode; see https://github.com/ajitid/fzf-for-js/issues/112 for details on its Unicode support.
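Basic usage of fzf-for-js (published on npm as fzf) looks roughly like this sketch; whether its matching handles Chinese well enough is what the linked issue tracks, and the heading strings below are made-up sample data:

```ts
// Hedged sketch of the fzf-for-js basic API: new Fzf(list) / fzf.find(query).
import { Fzf } from "fzf";

const headings = ["安装指南", "中文搜索支持", "Getting Started"];

const fzf = new Fzf(headings);
const results = fzf.find("中文");

for (const entry of results) {
  console.log(entry.item); // expected to print "中文搜索支持"
}
```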
Chinese is usually troublesome because there are no word breaks, meaning that the indexing must be done via either a heuristic to break up words or a natural language processor that understands the text and can break words.
Otherwise you'd need to index all individual characters as well as all pairwise combinations, at the very least.
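One common reading of that fallback is indexing character unigrams plus adjacent-character bigrams; a small illustrative sketch, not taken from the PR:

```ts
// Illustrative only: index every Han character plus every adjacent pair
// (bigram), so a query like "搜索" can match without real word segmentation.
// This naive filter also ignores non-Han characters entirely.
function cjkTokens(text: string): string[] {
  const chars = Array.from(text).filter((c) => /\p{Script=Han}/u.test(c));
  const tokens: string[] = [...chars];
  for (let i = 0; i + 1 < chars.length; i++) {
    tokens.push(chars[i] + chars[i + 1]);
  }
  return tokens;
}

console.log(cjkTokens("支持中文搜索"));
// ["支", "持", "中", "文", "搜", "索", "支持", "持中", "中文", "文搜", "搜索"]
```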
Looking forward to new progress on this PR.
Are there any new developments on this issue?
Is there any progress on this issue?
@ehuss I'd like to work on this feature request.
Would creating my own fork and making a new PR be the correct way to do this?