reading-time

feat: use cpm (characters per minute) for more accurate results

Open · jcha0713 opened this issue 2 years ago · 6 comments

Motivation

The library treats each CJK character as a separate word. However, unlike Chinese and Japanese, Korean characters should not be treated as words: a single character is often meaningless on its own and is only used to form a word. For this reason, the result does not seem very accurate when computing the reading time for Korean text.

I'm not an expert in Chinese or Japanese, but the same issue may apply to those languages as well, so there is a possibility that the library is also giving inaccurate results for them.

As a solution, I suggest counting all CJK characters as individual characters (rather than as words) and using cpm (characters per minute) for more accurate results. This way, we can count CJK characters and Latin words separately, compute two reading time values, and simply add them up.
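
Roughly, the idea looks like this (a minimal sketch only; the helper name, regex, and default values are illustrative assumptions, not the exact code in this PR):

// Split reading time into a WPM part for non-CJK words and a CPM part for CJK characters.
const CJK = /[\u4E00-\u9FFF\u3040-\u30FF\uAC00-\uD7AF]/g

function estimateMinutes(text: string, wordsPerMinute = 200, charactersPerMinute = 500): number {
  const cjkChars = (text.match(CJK) || []).length
  const words = text.replace(CJK, ' ').split(/\s+/).filter(Boolean).length
  return words / wordsPerMinute + cjkChars / charactersPerMinute
}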

Major changes

In this PR, I made several changes and added more test cases to ensure everything is working fine.

  1. First, I changed the WordCountStats type to have two fields, words and chars, instead of total. Then I replaced words in the ReadingTimeResult type with a counts object that groups words and chars together. I also changed the Options type to take an optional charactersPerMinute value; the default cpm is 500 (ref: Medium). A rough sketch of the reshaped types is shown after this list.

  2. As mentioned above, it now calculates two separate reading time values, one for CJK characters and one for non-CJK words, and adds them together to get minutes.

  3. I fixed a bug that occurred when the first character of the text is a punctuation mark.

  4. I introduced some new variables to improve the readability of the code.
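
For reference, here is a rough sketch of what the reshaped types could look like (a sketch only; field names other than words, chars, counts, and charactersPerMinute are assumptions rather than the exact shapes in the diff):

// Hypothetical shapes illustrating the type changes described above.
interface WordCountStats {
  words: number // non-CJK words
  chars: number // CJK characters
}

interface Options {
  wordsPerMinute?: number
  charactersPerMinute?: number // new; defaults to 500
}

interface ReadingTimeResult {
  minutes: number
  time: number
  counts: WordCountStats // replaces the old `words` field
}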

Another proposal

Currently, countWords handles links as single words. For example, https://google.com or [google](https://google.com) would each be treated as one word. However, I believe we should count these as multiple words, since it's more natural to read a link word by word. So I changed the logic to count all the words inside the link and altered the test cases accordingly. Please let me know if you have any concerns about this.
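
Roughly, the new counting behaves like this (an illustrative sketch; the function name and regex are not the exact code in the PR):

// Count the words inside a markdown link or bare URL instead of treating it as one word.
function countLinkWords(link: string): number {
  return link
    .split(/[^A-Za-z0-9]+/) // split on separators such as '[', ']', '/', '.', ':'
    .filter(Boolean).length
}

countLinkWords('[google](https://google.com)') // 4: google, https, google, com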

I believe this PR will help CJK users get much more accurate reading time estimations. In fact, when I tested it with one of my blog posts written in Korean, it gave me 13 minutes, which is pretty accurate (previously it was 28 min 😓).

jcha0713 · Dec 10 '22

Hello, @ngryman. It's a new year, so I just wanted to follow up on this pull request. I was wondering if there is any chance it could be reviewed in the near future? I'm excited to contribute to the project and would really appreciate any feedback. Thank you for your time and consideration. I look forward to your response.

jcha0713 · Jan 01 '23

As a solution, I suggest counting all CJK characters as individual characters (rather than as words) and using cpm (characters per minute) for more accurate results.

@jcha0713 isn't it better to use WPM for Korean (which is space-delimited) and CPM only for Japanese and Chinese?

Anyway, it's a shame this PR hasn't been merged yet.

abiriadev · Mar 28 '24

@ngryman Project dead? Please review (and merge) this MR.

macx · Apr 17 '24

Hi: I'm co-maintaining. I'm not sure if @ngryman has time to review at all. I think it's very hard to gauge what a "word" means and whether reading speed can really be measured accurately by either "words" or "characters". Even in Chinese, I would say a two-character word can be read faster than two separate single-character words. If anything, we should use Intl.Segmenter instead, which separates words by semantics rather than by their string forms, and that would easily solve the Korean problem. It has been supported since Node 16 and is available in most browsers (except Firefox, unfortunately). This PR already contains a breaking change. Why don't we stop iterating upon this fundamentally broken counting algorithm and use something more robust?
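
A quick illustration of what I mean by semantic segmentation (the sample string is just an example, and the exact output depends on the runtime's ICU data):

// With word granularity, CJK text is segmented into semantic words rather than single characters.
const zh = new Intl.Segmenter('zh', { granularity: 'word' })
const segments = [...zh.segment('今天天气很好')]
  .filter((s) => s.isWordLike)
  .map((s) => s.segment)
// e.g. ['今天', '天气', '很', '好'] instead of six separate characters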

Josh-Cena · Apr 17 '24

@Josh-Cena

Why don't we stop iterating upon this fundamentally broken counting algorithm and use something more robust?

Sounds possible, so I prototyped a naive implementation:

// Korean wpm example

// example text taken from Korean wikipedia: https://ko.wikipedia.org/wiki/%ED%95%9C%EA%B5%AD%EC%96%B4
const text = `한국어(韓國語, 문화어: 조선말)는 대한민국과 조선민주주의인민공화국의 공용어이다. 둘은 표기나 문법에서는 차이가 없지만 표현에서 차이가 있다.`

const w = [
	...new Intl.Segmenter(undefined, {
		granularity: 'word',
	}).segment(text),
].filter(({ isWordLike }) => isWordLike).length

// Korean wpm source: https://www.jkos.org/upload/pdf/JKOS057-04-17.pdf
console.log(w / 202.3)

// result: 0.08403361344537814m ≈ 5.0420168067s

But the locale matters. If we leave it undefined, it is determined automatically from the system preference, which will make it hard to keep unit tests consistent.
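
For what it's worth, pinning an explicit locale would sidestep that (just a sketch, reusing the text above):

// Pin the locale so segmentation is deterministic regardless of the system default.
const segmenter = new Intl.Segmenter('ko', { granularity: 'word' })
const wordCount = [...segmenter.segment(text)]
  .filter(({ isWordLike }) => isWordLike).length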

abiriadev · Apr 18 '24

I agree that we should use a more robust method if possible. I'm not sure whether measuring reading speed based on semantic segments is the best way, but I guess it's the most practical solution we have as of now.

In fact, I first created a library that uses Intl.Segmenter before I submitted this PR, but abandoned it due to a performance issue. That was two years ago, and it's possible that my code was broken. I might have to do some more research to improve it, but in the meantime, please take a look at the repo if you're interested: jcha0713/better-reading-time (sorry for the rude naming).

I'm happy to do some more work based on this if you'd like.

jcha0713 · Apr 18 '24